When replicating a table that has an identity column, I get the error: "Procedure cp_insert_tblname expects parameter @C1, which was not supplied." The stored procedure appears to be called without any parameters, so my insert stored procedure does not work. I know I'm missing something basic here! Do I have to add the field names when telling replication to use a custom stored procedure? If not, how do arguments get passed to my SP, as globals somehow?
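For what it's worth, when an article's insert command is set to CALL a custom procedure, the distribution agent passes one parameter per published column, positionally and in column order; you don't list field names anywhere. A minimal sketch of the shape the procedure needs, assuming a hypothetical two-column article (table and column names here are made up):

CREATE PROCEDURE cp_insert_tblname
    @c1 int,                 -- identity column value, passed like any other parameter
    @c2 varchar(50)
AS
BEGIN
    SET IDENTITY_INSERT dbo.tblname ON;
    INSERT INTO dbo.tblname (id_col, data_col)
    VALUES (@c1, @c2);
    SET IDENTITY_INSERT dbo.tblname OFF;
END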
The client is running the X-version of an application, and the corresponding database is huge. The client's vendor is now releasing the Y-version of the same application with many database schema changes (new tables added, new columns added, existing columns renamed, etc.). To upgrade to the Y-version, the vendor is suggesting that my client take the system down and upgrade the application/database. We are sure this process will take several days, and my client is not prepared to take the system down for that long, so we are trying to find a solution with minimal downtime. The approach we are considering is:
1) Create a replicated copy of the X-version database on another server (server2) from the production server (server1) using GoldenGate.
2) Create the new tables and schema-updated tables from the Y-version database on server2 (the replica). For the updated-schema tables we plan to use the name <table_name>_Y_version, since a table with the original name already exists in the X-version.
3) With the above two steps, GoldenGate replicates the changes from production to server2, and server2 also holds the new Y-version table schemas (under the '_Y_version' suffix). By the way, there is no effect on production.
4) At this stage we are trying to find the best approach to fill the <table_name>_Y_version tables from the X-version tables. There are two challenges here: a) all data needs to be moved to the Y-version tables, and b) the data has to stay in sync in real time.
We have thought of:
a) an SSIS package to pump the data into the Y-version tables, but real-time data will not stay in sync;
b) a trigger-based technique (see the sketch below), but previous experience tells us triggers add a lot of load.
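For illustration, a minimal sketch of what option (b) could look like, assuming a hypothetical X-version table dbo.Customer whose Name column becomes FullName in dbo.Customer_Y_version (all names here are invented, and deletes would need a companion trigger):

CREATE TRIGGER trg_Customer_sync_Y ON dbo.Customer
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- upsert every touched row into the Y-version copy of the table
    MERGE dbo.Customer_Y_version AS tgt
    USING inserted AS src
        ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED THEN
        UPDATE SET tgt.FullName = src.Name        -- column renamed in the Y-version
    WHEN NOT MATCHED THEN
        INSERT (CustomerID, FullName)
        VALUES (src.CustomerID, src.Name);
END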
When I do a snapshot, I have it set up to truncate before inserting. As a result I'm getting an error saying that it can't truncate a table referenced in an indexed view. What settings should I use to allow a snapshot in this case? Should I manually drop the schema binding, snapshot, then recreate the schema binding? There has to be a better way.
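One avenue that may help: the article's "action if name is in use" can be changed from truncate to delete, and DELETE (unlike TRUNCATE) is allowed on a table referenced by an indexed view. A sketch using sp_changearticle, with hypothetical publication and article names:

EXEC sp_changearticle
    @publication = 'MyPublication',
    @article     = 'MyTable',
    @property    = 'pre_creation_cmd',
    @value       = 'delete';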
Is there a way to get the column names for each table in a database using one query? Something like:

tbl  colname
t1   catID
t1   catName
t2   prodID
t2   prodDesc
t3   cartID
...  ...
I know it would be long, but I would just be searching through the saved output for specific names.
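Something along these lines should produce exactly that two-column listing in one query:

SELECT TABLE_NAME  AS tbl,
       COLUMN_NAME AS colname
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION;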
I have a situation in which I have to get the last value stored in the primary key column of every table. Based on this value I have to update another table that stores the table names and the last key value for each table. The values in this table are not correct, so I have to update it now. I was trying to write a cursor for this, but the problem is I can't work out how to find, in code, the column name on which the primary key is defined for each table.
I would appreciate it if someone could help me out with this.
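For reference, the primary-key columns for every table can be pulled from the catalog views; if the keys are identity columns, IDENT_CURRENT('table_name') then gives the last value generated. A sketch:

SELECT tc.TABLE_NAME, kcu.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS tc
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS kcu
    ON  kcu.CONSTRAINT_NAME   = tc.CONSTRAINT_NAME
    AND kcu.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
WHERE tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
ORDER BY tc.TABLE_NAME, kcu.ORDINAL_POSITION;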
If you need to inner join 2 tables that have some column names that are the same, how can you have those columns be named differently in the query result without aliasing them individually?
I tried select a.*, b.* from tbldm a, tblap b where a.id = b.id, hoping the column names in the result would be prefixed with a. and b., but they weren't.
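As far as I know the engine never prefixes result columns automatically; the fallback is an explicit alias per clashing column, e.g. (column names hypothetical):

SELECT a.id AS dm_id,
       b.id AS ap_id
FROM tbldm a
JOIN tblap b ON a.id = b.id;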
From the INFORMATION_SCHEMA.TABLES view I want to return the TABLE_NAME of tables that have columns named, say, Email and EmailStatusId. Is it possible to do this with a single select statement, or would I have to use two selects?
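One single-select approach, grouping INFORMATION_SCHEMA.COLUMNS by table and requiring both names to be present:

SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME IN ('Email', 'EmailStatusId')
GROUP BY TABLE_NAME
HAVING COUNT(DISTINCT COLUMN_NAME) = 2;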
I have two different tables: one for all staff, and another for all temp staff. I need both to output to a datagrid, so I need to grab both tables with one SQL query, but I can't seem to get the logic right. Can someone give me some suggestions on why my results are blank when I run this query? I thought a simple join would allow both sets of identical column names to coexist in peace:

SELECT TOP 100 PERCENT
    dbo.StaffDirectory.UserName, dbo.StaffDirectory.LastName, dbo.StaffDirectory.FirstName,
    dbo.StaffDirectory.Dept, dbo.StaffDirectory.Title, dbo.StaffDirectory.EMail, dbo.StaffDirectory.Location
FROM dbo.StaffDirectory
INNER JOIN dbo.TempStaff
    ON  dbo.StaffDirectory.Location  = dbo.TempStaff.Location
    AND dbo.StaffDirectory.EMail     = dbo.TempStaff.Email
    AND dbo.StaffDirectory.Title     = dbo.TempStaff.Title
    AND dbo.StaffDirectory.Dept      = dbo.TempStaff.Dept
    AND dbo.StaffDirectory.FirstName = dbo.TempStaff.FName
    AND dbo.StaffDirectory.LastName  = dbo.TempStaff.LName
    AND dbo.StaffDirectory.UserName  = dbo.TempStaff.UName
    AND dbo.StaffDirectory.MDNo      = dbo.TempStaff.MDNo

Is something wrong here? It just doesn't work. Any suggestions would be really appreciated. Thank you.
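If the goal is to stack both staff lists into one result rather than match rows between them, a UNION ALL along these lines (reusing the column names from the query above) may be closer to the intent:

SELECT UserName, LastName, FirstName, Dept, Title, EMail, Location
FROM dbo.StaffDirectory
UNION ALL
SELECT UName, LName, FName, Dept, Title, Email, Location
FROM dbo.TempStaff;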
I'm not a full-time DBA, so excuse the way I phrase my question. I have a database with 2 tables in SQL 2005. Both tables have similar column names, EXCEPT for new extra columns in FY2007_DATA. I can visually see the difference in columns in Database Diagrams. My goal: compare the FY2007 table's column names to FY2006's and display only the columns that do not match.
Tbl 1: FY2006_DATA
Tbl 2: FY2007_DATA
With online reading and help I have managed to get a script that does exactly the opposite of what I want. Below is the query.
/* This query compares the column names from two tables and displays the ones that have an exact match. It is not case-sensitive. */
Select a.Table_Name, a.Column_Name, b.Table_Name, b.Column_Name
From [2006-2011].INFORMATION_SCHEMA.Columns AS a
Join [2006-2011].INFORMATION_SCHEMA.Columns AS b
    on a.Column_Name = b.Column_Name
Where a.TABLE_NAME = 'FY2006_DATA'
  And b.TABLE_NAME = 'FY2007_DATA'
  AND a.Column_Name IN (
      Select Column_Name = LEFT(c.column_name, 20)
      FROM [H1B_2006-2011].INFORMATION_SCHEMA.Columns AS c
      WHERE c.TABLE_NAME = 'FY2007_DATA'
  )
When I change "AND a.Column_Name IN.." to "AND a.Column_Name NOT IN.." so that the results will (should) display the extra columns in FY2007, in fact I do not see any results, but query executes perfect.
I am trying to create views that would join 2 tables:
Table 1 has all the columns needed by the view.
Name: Product
Structure: ID, Attribute1, Attribute2, Attribute3, Attribute4, Attribute5, etc.

Table 2 is a lookup table that provides the names of the columns.
Name: lookupTable
Structure: tableName, ColumnName, columnValue
Values:
Product, Attribute1, Color
Product, Attribute2, Size
Product, Attribute3, Flavor
Product, Attribute4, Shape
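If the attribute-to-name mapping is effectively static, the view can simply alias the generic columns using the names stored in lookupTable; a dynamic version would have to build this same statement from lookupTable with dynamic SQL. A sketch:

CREATE VIEW vw_Product AS
SELECT ID,
       Attribute1 AS Color,     -- names taken from the lookupTable rows above
       Attribute2 AS Size,
       Attribute3 AS Flavor,
       Attribute4 AS Shape
FROM Product;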
I'm in the process of converting a rather huge VSAM database into a set of SQL tables. I am keeping the same data names from the mainframe (like XDB-NAME to RDB-NAME). I load the files using Import/Export Data, and it creates the tables with column names such as col001, col002, col003, etc., and always sets the data types to varchar(255). So I have to cut and paste the data names from the mainframe side to the server side (and the data types too). Is there an easier way to do this, or am I doomed to cut-and-paste my days away? Thanks for any help.
OK, I am finally giving in on this one and asking for some help! I am trying to set up a T-SQL script that will extract the data from all the tables in the current database to a CSV file, including column names!
I know that bcp cannot handle the column names, so I tried to get around this by appending the column names from a separate select, but unfortunately the select gives me the names in alphabetical order rather than field order.
I have tried putting an ORDER BY on the select, but it does not seem to have any effect. The snippet of my script that is causing the problem is here:
-- set up the echo command
select @colcommand = 'exec master..xp_cmdshell ''echo ' + @names
     + ' >> c:\cp\' + TABLE_CATALOG + '.' + TABLE_SCHEMA + '.' + TABLE_NAME + '.txt'''
from INFORMATION_SCHEMA.TABLES
where TABLE_NAME = @TABLE
And just in case you are interested in the rest of the script, the full monster is included at the bottom of the post. Also, if you can see any more efficient ways of doing what I am trying to do, please let me know!
-- Script to create a csv file of data from all tables inside current database

-- declare all variables
declare @command      varchar(200)  -- command used for bcp
declare @fetch_status int           -- variable for fetch status in cursor
declare @TABLE        varchar(200)  -- variable to hold table name
declare @colcommand   varchar(200)  -- variable to hold column creation command
declare @count        int           -- variable used to determine first iteration of column loop
declare @names        varchar(100)  -- variable used for the column names
declare @delimiter    varchar(10)   -- variable used for delimiter in column names

SET @delimiter = ','  -- set the delimiter to comma
select @count = 0     -- initialise the count variable

-- cursor to create the bcp commands that export the data files to csv format
declare bcpcommand cursor READ_ONLY FOR
select 'exec master..xp_cmdshell ''bcp ' + TABLE_CATALOG + '.' + TABLE_SCHEMA + '.' + TABLE_NAME
     + ' out c:\cp\' + TABLE_CATALOG + '.' + TABLE_SCHEMA + '.' + TABLE_NAME + '.txt'
     + ' -c -t, -T -S' + @@servername + ''''
from INFORMATION_SCHEMA.TABLES
where TABLE_TYPE = 'BASE TABLE'

-- cursor to pick up all the tables in the given database (used for the column names section)
declare dbtables cursor READ_ONLY FOR
select TABLE_NAME from INFORMATION_SCHEMA.TABLES where TABLE_TYPE = 'BASE TABLE'

open bcpcommand
select @fetch_status = 0
while @fetch_status = 0
begin
    fetch next from bcpcommand into @command
    select @fetch_status = @@fetch_status
    if @fetch_status <> 0 begin continue end
    -- print 'Command to be run : ' + @command
    exec (@command)
end

-- close and tidy up (the cursor is named bcpcommand, not runme)
close bcpcommand
deallocate bcpcommand

-- now create the fieldname files and then echo the 2 files together!

open dbtables
select @fetch_status = 0
while @fetch_status = 0
begin
    fetch next from dbtables into @TABLE
    select @fetch_status = @@fetch_status
    if @fetch_status <> 0 begin continue end

    SELECT @names = COALESCE(@names + @delimiter, '') + name
    FROM syscolumns
    where id = (select id from sysobjects where name = @TABLE)

    -- due to the concatenation used, the second iteration onwards has a comma attached to
    -- the front of the line; this section removes the first character
    if @count <> 0 begin select @names = SUBSTRING(@names, 2, 198) end

    -- set up the echo command
    select @colcommand = 'exec master..xp_cmdshell ''echo ' + @names
         + ' >> c:\cp\' + TABLE_CATALOG + '.' + TABLE_SCHEMA + '.' + TABLE_NAME + '.txt'''
    from INFORMATION_SCHEMA.TABLES
    where TABLE_NAME = @TABLE

    -- print 'COMMAND : ' + @colcommand
    exec (@colcommand)

    -- reset @names for the next iteration, and set @count to 1 to trigger the IF above
    select @names = ''
    select @count = 1
end

-- close and tidy up
close dbtables
deallocate dbtables
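On the alphabetical-order problem: the @names query above reads syscolumns with no ORDER BY, so nothing fixes the order. Ordering by the column's position should give field order; note that variable concatenation combined with ORDER BY has never been formally guaranteed by SQL Server, though it commonly works:

SELECT @names = COALESCE(@names + @delimiter, '') + name
FROM syscolumns
WHERE id = OBJECT_ID(@TABLE)
ORDER BY colorder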
I'm working on my first data warehouse and I'm not sure how I should name the columns in the database.
The first phase of the data warehouse is to store a bunch of data from one third-party source. The source contains over 100 pieces of data, and the business user doesn't even know what some of the fields are, but he wants to store everything. The third party refers to each field with a somewhat cryptic short name and a longer description. The short name isn't always cryptic.
My question is: am I better off naming my columns the same as the source system's short names so that I can easily debug problems later, or should I try to shorten their descriptions into something meaningful? On a side note, I'm 100% positive that we'll never populate the tables in question with data from an additional source.
I need to migrate data from one SQL database to another. The second DB is a newer version of the "old" database with mostly the same tables and field names. In order to support some reporting queries in the "new" version, I needed to change the data type of a few fields from varchar to int (the data stored was integers already, as they were lookup tables). DTS works great except in the cases of about 10 fields whose data types I changed from varchar to int. DTS seems to drop the data if the field name and data type are not an exact match. Is there any way to use DTS and have it copy data from a field called subsid of type varchar to a field called subsid of type int?
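One workaround for those ten fields is to skip DTS and use a plain INSERT ... SELECT with an explicit CAST; the database and table names below are hypothetical, with subsid taken from the post:

INSERT INTO NewDB.dbo.SubsLookup (subsid, descr)
SELECT CAST(subsid AS int), descr
FROM OldDB.dbo.SubsLookup;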
I have a SQL text column from sp_who2 in table #SqlStatement, like the single row shown below:
"update Panel set PanelValue=7286 where PanelFirmwareID=4 and PanelSettingID=9004000"
I want to find which table and column names appear in that text. I tried the query below:
Select B.Statement
from #sp_who2 A
LEFT JOIN #SqlStatement B ON A.spid = B.spid
where B.Statement IN (
    SELECT T.name, C.name
    FROM sys.tables T
    JOIN sys.columns C ON T.object_id = C.object_id
    WHERE T.type = 'U'
)
In short: I want to find the table names and column names referenced in the statement text.
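IN can't compare a single value against a two-column subquery, which is one reason the attempt above fails. A crude sketch that pattern-matches the statement text against the catalog instead (it will also hit substrings, so expect false positives):

SELECT DISTINCT B.spid, T.name AS table_name, C.name AS column_name
FROM #SqlStatement AS B
JOIN sys.tables  AS T ON B.Statement LIKE '%' + T.name + '%'
JOIN sys.columns AS C ON C.object_id = T.object_id
                     AND B.Statement LIKE '%' + C.name + '%';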
I was wondering if anyone has an idea of how we could find the table names and column names of the tables in our SQL Server database dynamically at runtime, given our connection string? Please let me know.
I created an SSIS package which exports data from an OLE DB source to a flat file (CSV format). For this I have an OLE DB source and a Flat File destination. I generate the file and file name dynamically, with the column names in the first row. If the dynamically generated file name already exists, I want to append the data to that existing file, but I don't want to append the column names again; I just want to append the new rows to the existing rows.
So let's say the first time my package generates a file called File1_3132008.csv:
Col1, Col2
1,2
3,4
After some days, if my SSIS package generates the same file name, i.e. File1_3132008.csv, this time I just want to append the rows to the existing file. So the file should look like this:

Col1, Col2
1,2
3,4
5,6
7,8
But instead, if I set the Overwrite property to false, my file looks like this:
Col1, Col2
1,2
3,4
Col1, Col2
5,6
7,8
Can anyone help me get the file to look like the second example shown above?
I have a number of "join" tables ie joins records from two other tables for example, an employee may be responsible for more than one product so the join table would look like this:
table name: employee_products Employee_id foreign key from employee table product_id goreign key from products table
My question is: how do I replicate this table? Replication requires every table to have a primary key field. In this case both fields are foreign keys, and I don't have a primary key, as the same value appears regularly in either field.
How should I get around this so I can implement replication? I don't want to have to add another field just to act as the primary key.
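If each employee/product pair occurs only once (which is usually the case for a join table), a composite primary key over the two existing foreign-key columns satisfies replication without adding any field:

ALTER TABLE employee_products
ADD CONSTRAINT PK_employee_products
PRIMARY KEY (Employee_id, product_id);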
I have a table that is used for reporting. The problem is that the data in the table is refreshed every 30 minutes with a bulk insert. I am trying to find a way to have two tables that are mirror images of each other, so that when the loading table is loaded, it assumes the identity of the reporting table. The basic principle is that the table needs to be available almost all the time; while the bulk insert is happening, users cannot query. Any help would be greatly appreciated.
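One common pattern is to bulk insert into a staging twin and then swap the names in a short transaction, so readers are only blocked for an instant. A minimal sketch with hypothetical table names (sp_rename will warn about dependencies, which is expected here):

BEGIN TRAN;
EXEC sp_rename 'dbo.Report', 'Report_old';
EXEC sp_rename 'dbo.Report_staging', 'Report';
EXEC sp_rename 'dbo.Report_old', 'Report_staging';
COMMIT;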
Hi, please let me know if it is possible to replicate a table with the identity property defined on it. Both the publisher and subscriber tables have the identity property defined. Which option should be used when setting up transactional replication to allow the identity values at the publisher to be pushed to the subscriber, which also has the identity property defined? The NOT FOR REPLICATION option on the identity property also fails. Whichever option I choose, I get the error, 'An explicit value for the identity column in table 'jobs_id_nfr' can only be specified when a column list is used and IDENTITY_INSERT is ON.' This works only if the identity property is not defined at the subscriber. But I need the identity property defined at the subscriber too, because the subscriber should be an exact copy of production.
My data flow has three components:
an OLE DB source which calls a stored proc that returns a result set
a data conversion
an Excel destination

I am in design mode in Business Intelligence studio. My Excel destination (with an Excel connection) shows no sheet name, even though I have an Execute SQL task before the data flow that creates the Excel table called SHEET1. Needless to say, there are no output columns visible to do any mappings. I did go to the Excel connection and set the OpenRowset property to SHEET1, but it seems to have no effect.
I can do the export in SQL Server Management Studio and that works fine, but it is basic and does not meet my requirements. I have to customize the package to allow dynamic Excel file names based on account names, and I have to split my result set into multiple Excel sheets because Excel 2003 has a maximum of 65,536 rows per sheet. Also, when I use the export wizard, the source is a table, whereas eventually the source has to be a stored proc with input parameters.
What am I missing or doing wrong? Thanks in advance.
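In case it helps to reproduce this: the Execute SQL Task runs Jet-style DDL against the Excel connection, something like the sketch below (column names hypothetical). In my experience the destination only shows the sheet, as SHEET1$, after the DDL has actually been run once against a reachable file at design time:

CREATE TABLE `SHEET1` (
    `AccountName` NVARCHAR(255),
    `Amount` DOUBLE
)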
I would like to add a column to a published table but not have that column replicated to subscribers. I can accomplish this via the UI by adding the column and then unchecking it. This adds the column to the publisher table but does not replicate it to the subscriber.
I am looking for a programmatic method to add a column to the base table and unmark it for replication.
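The same vertical filter the UI checkbox drives is exposed through sp_articlecolumn; a sketch with hypothetical publication and article names:

EXEC sp_articlecolumn
    @publication = 'MyPublication',
    @article     = 'MyTable',
    @column      = 'NewColumn',
    @operation   = 'remove',
    @force_invalidate_snapshot = 1;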
I am replicating an 80GB database between NY and CT and would like to know why table sizes are different between the two. Here is an example from sp_spaceused:

NY: IOI_2007_04_23  rows(279,664)  reserved(464,832)  data(439,960)  index_size(24,624)
CT: IOI_2007_04_23  rows(279,666)  reserved(542,232)  data(493,232)  index_size(48,784)

Thanks,
I am using the following select statement to get the row count from a SQL linked server table:
SELECT Count(*) FROM OPENQUERY (CMSPROD, 'Select * From MHDLIB.MHSERV0P')
MHDLIB is the library name in the IBM DB2 database. The above query gives me only the row count of table MHSERV0P. However, I need to get the names, row counts, and sizes of all tables that exist in the MHDLIB library. Is it possible at all?
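Possibly, by pointing OPENQUERY at the DB2 catalog instead of the table. The sketch below assumes the server is DB2 for i and exposes the QSYS2.SYSTABLESTAT catalog view with NUMBER_ROWS and DATA_SIZE columns; that is an assumption worth verifying for your DB2 version:

SELECT *
FROM OPENQUERY(CMSPROD,
    'SELECT TABLE_NAME, NUMBER_ROWS, DATA_SIZE
     FROM QSYS2.SYSTABLESTAT
     WHERE TABLE_SCHEMA = ''MHDLIB''');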
I need to copy all the data from all the tables in a database to a copy of this database on another server. What feature of SSIS should I take advantage of to accomplish this?
We have an SLA for 8am; most days the data warehousing jobs complete at 8:05am. Adding an additional process/set of tasks to this package would obviously make matters worse, so I'm trying to update/copy/replicate the data in the fastest manner possible. Typically we're talking 2 marts (10-20GB) with 2 large tables (5-10 million records) and 20 marts (0.5-5 GB) with many more smaller tables (~40 tables with record counts ranging from 1 to a million).
Additionally, please indicate whether the design/feature you suggest can push schema changes and additions to the target server, i.e. handle schema changes or new tables/views added to the source database.
My only idea so far is to use the import wizard (in Management Studio) to create an SSIS package (to copy all the tables from one server to another) and save it to the server, then execute this package after the job completes. However, this would not work if the schema of a table changed or if a table were added. Moreover, I don't think I can edit this package in Visual Studio.
We need to insert data/rows from a SQL Server 2014 database into an MS Access database. The problem is that there are so many columns (100+) in the table, and so many insert transactions of this kind (from different tables), that it is not easy to write VB.NET code listing all the column names.
Both the Access and SQL Server tables have the same number of columns and equivalent data types, so inserting is not really the problem. Is there a way to write an insert statement in T-SQL that does not name all the columns?
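T-SQL does allow INSERT without a column list as long as the SELECT supplies every column in table order. If the Access file is reachable from SQL Server via the ACE OLE DB provider (ad hoc distributed queries must be enabled; the path and names below are hypothetical), the whole statement collapses to:

INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'C:\Data\Target.accdb';'Admin';'',
    'SELECT * FROM TargetTable')
SELECT *
FROM dbo.SourceTable;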
Firstly, I consider myself quite an experienced SQL Server user, and am now using SQL Server 2005 Express as the main backend of my software. My problem is this: the boss needs to run reports. I have designed these reports as SQL procedures, to be executed through an ASP application. Basic and even medium-sized (10,000+ record) reports run at an acceptable speed, but for anything larger, IIS timeouts and query timeouts often cause problems.

I subsequently came up with the idea that I could reduce processing times by up to two-thirds by writing information from each calculation stage to a number of tables as the reporting procedure runs, i.e.:
stage 1 writes to table xxx1,
stage 2 reads table xxx1 and writes to table xxx2,
stage 3 reads table xxx2 and writes to table xxx3,
etc., etc., and the procedure reads the final table and outputs the information.

This works wonderfully, EXCEPT that two people can't run the same report at the same time, because as one procedure creates and writes to table xxx2, the other procedure tries to drop the table, or reads a table that has already been dropped.

Does anyone have any suggestions about how to get around this problem? I have thought about generating the table names dynamically using 'sp_execute', but the statement I need to run is far too long (apparently there is a maximum length you can pass to it), and even breaking it down into sub-procedures is so time-consuming and inefficient, having to format statements as strings (replacing quotes and so on). How can I use multiple tables, or indeed process HUGE procedures, with dynamic table names or temporary tables? All answers/suggestions/questions gratefully received. Thanks.
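One suggestion: session-scoped temp tables (#name) sidestep the collision entirely, because each connection gets its own copy, and they need no dynamic naming at all. A toy sketch of the staged pattern (table and column names invented):

-- each session gets a private #stage1/#stage2, so two users can run the report concurrently
SELECT CustomerID, SUM(Amount) AS Total
INTO #stage1
FROM dbo.Sales
GROUP BY CustomerID;

SELECT CustomerID, Total
INTO #stage2
FROM #stage1
WHERE Total > 0;

SELECT * FROM #stage2;          -- final output

DROP TABLE #stage2, #stage1;    -- optional; they disappear when the session ends anyway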
I am very new to SQL Server. I have plenty of SQL knowledge, but the whole SQL Server environment is new to me.
I am working with SQL Server 2005. My task is to generate reports without affecting our live database. I have set up a second server and installed SQL Server 2005 on that too. My thought was that maybe I could mirror or replicate the table I require over to this new server and run my queries from there. Is this easy to do?
I read that mirroring might not work, as it is solely for backup/failover purposes, and the data on the mirrored server would not be accessible.
I have also been looking at SSIS, but at the moment this is all a bit like double Dutch to me! Can anyone point me in the right direction, preferably somewhere beginner-friendly, i.e. not overly complicated!