We have three servers: Server A - Primary, Server B - Mirror, Server C - Stand Alone (also used for reporting, but not using SQL Reporting).
All three servers are interconnected using linked servers.
Here's a brief overview of my problem:
A certain procedure requires me to get live data from the 'Primary' to Server C on a regular basis. This procedure will be scheduled to run once a day on Server C.
Here's the code:
create procedure sp_getDataFromPrimary
as
-- First, check whether Server A currently holds the principal role.
-- sys.database_mirroring has no name column, so join to sys.databases;
-- the role description for the active side is 'PRINCIPAL', not 'primary'.
if exists (select *
           from [serverA].master.sys.database_mirroring dm
           join [serverA].master.sys.databases d on d.database_id = dm.database_id
           where d.name = 'database1'
             and dm.mirroring_role_desc = 'PRINCIPAL')
begin
    insert into [serverC].[database1].[dbo].[table1]
    select * from [serverA].[database1].[dbo].[table1]
end
else
begin
    insert into [serverC].[database1].[dbo].[table1]
    select * from [serverB].[database1].[dbo].[table1]
end
go
When I try to create the above procedure, I get an error message similar to: 'cannot access [serverB].[database1].[dbo].[table1] as the database is being mirrored'. I tried to use variables for the server, database, and tables, but to no avail -- I still get the same error.
My question now is: how can I make this work? If you think my approach is faulty, please suggest another.
Eagerly awaiting your response.
Thank you.
p.s.: I cannot use SSIS, as it would complicate the additional processing that needs to be done.
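One pattern that may help (a sketch, not a tested fix): the linked-server reference to the mirror appears to be resolved when the procedure is created, so moving the remote queries into dynamic SQL defers resolution to run time, and only the branch that targets the current principal is ever referenced. Since the procedure runs on Server C, the four-part [serverC] prefix on the insert target can also be dropped:

create procedure sp_getDataFromPrimary
as
declare @src sysname, @sql nvarchar(max);

-- Pick whichever server currently holds the PRINCIPAL role.
if exists (select *
           from [serverA].master.sys.database_mirroring dm
           join [serverA].master.sys.databases d on d.database_id = dm.database_id
           where d.name = 'database1'
             and dm.mirroring_role_desc = 'PRINCIPAL')
    set @src = N'serverA';
else
    set @src = N'serverB';

-- Only the chosen server is referenced at run time; the mirror is never touched.
set @sql = N'insert into [database1].[dbo].[table1] '
         + N'select * from ' + quotename(@src) + N'.[database1].[dbo].[table1];';
exec sp_executesql @sql;
go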
Hi Everyone, I need help on how to produce a SQL Server database primary data file. I have read the Visual C# 2006 book by Deitel and Deitel; unfortunately, the book does not say how to produce the aforementioned data file. The software that I currently have to help me produce this data file is Visual C#, J#, Web Development, SQL Server 2005, and Office 2007.
I have a good understanding of databases from studying SQL and Oracle, but I am not sure which is the best software to use to produce the primary data file; ideally I would like to use Access from my Office 2007 package.
Many thanks for your help in resolving this problem.
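For what it's worth, the primary data file (.mdf) is created by SQL Server itself when a database is created, rather than by a separate tool such as Access; a minimal sketch (database name, paths, and sizes are illustrative):

-- Creating a database produces its primary data file (.mdf) and log file (.ldf).
CREATE DATABASE MyChemDb
ON PRIMARY
(
    NAME = MyChemDb_data,
    FILENAME = 'C:\Data\MyChemDb.mdf',   -- placeholder path
    SIZE = 10MB
)
LOG ON
(
    NAME = MyChemDb_log,
    FILENAME = 'C:\Data\MyChemDb_log.ldf',
    SIZE = 5MB
);
GO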
Hello guys. My server crashed, but luckily I was able to get back my files with the help of recovery software. Now, all I have from the database are the SQL Server primary data file (.mdf) and the transaction log file (.ldf). I need to restore this data back to SQL Server.
Please could someone tell me how to restore these two file types back to my SQL Server 2007 database? Thanks, netboy
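A sketch of the usual approach, assuming both files are intact: since there is no backup to RESTORE from, attach the existing .mdf/.ldf pair as a database (the paths and database name are placeholders):

-- Attach a recovered data/log file pair as a database.
CREATE DATABASE MyRecoveredDb
ON (FILENAME = 'C:\Recovered\mydb.mdf'),
   (FILENAME = 'C:\Recovered\mydb_log.ldf')
FOR ATTACH;
GO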
An extract from a file-based legacy accounting system is performed every night. The system does not have a primary key because transactions are managed through program code (the more things change...). The extract is copied to text on Unix and FTP'd to Windows, where the file is loaded into SQL Server by kill & fill. Because of the expense of modifying the source system, there is enormous inertia/resistance to injecting a primary key at the source, so kill & fill it stays.
In reading about Change Data Capture, it seemed to me that column-level inserts, updates, and deletes are stored in tables that remember the before and after content of each column tracked. In my reading I have seen many references to the LSN being used to decide when and what to record as changed, but I have not seen any reference to a primary key being necessary for Change Data Capture to work. This is in contrast to replication, where the requirement for a primary key is made plain.
Is it possible to use Change Data Capture against a table without a primary key? Or can it record before and after column changes based on the LSN only? And how could it be used to change the extract from kill and fill to incremental?
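For reference, a sketch of enabling CDC on a keyless table (database, schema, and table names are hypothetical). CDC itself does not require a primary key; a key or unique index is only needed for net-changes queries, so @supports_net_changes must stay 0 here and the consumer has to read all changes:

-- Enable CDC at the database level, then on the keyless table.
USE AccountingStage;
GO
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'LegacyExtract',  -- hypothetical table name
    @role_name            = NULL,
    @supports_net_changes = 0;                 -- net changes require a PK/unique index
GO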
I need to modify a table to reside on a new filegroup and also point TEXTIMAGE_ON to that filegroup instead of PRIMARY. Apparently, in the past the only way to achieve this via SQL was to create a new table, copy over the data, drop the old table, and rename the new table to the original name. I found this solution in the SQL Server 2005 forum.
Is there any other way to alter this table in order to point TEXTIMAGE_ON to the new filegroup using SQL Server 2014? We are on Standard edition. The technique I am using is the drop constraint (with move option) and add constraint (to new filegroup) commands. The data and indexes move, but not the text data (it is still in the PRIMARY filegroup).
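As far as I know this is still the case in SQL Server 2014: the drop/add constraint WITH (MOVE TO ...) route relocates in-row data but not the LOB allocation unit, so the rebuild-and-rename workaround remains. A sketch with hypothetical names:

-- Rebuild the table on the new filegroup, pointing LOB storage there too.
CREATE TABLE dbo.MyTable_new
(
    Id    int          NOT NULL,
    Notes varchar(max) NULL,
    CONSTRAINT PK_MyTable_new PRIMARY KEY CLUSTERED (Id) ON [FG_NEW]
)
ON [FG_NEW]
TEXTIMAGE_ON [FG_NEW];

INSERT INTO dbo.MyTable_new (Id, Notes)
SELECT Id, Notes FROM dbo.MyTable;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';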
I have been creating databases in SQL 2008 with a primary filegroup for the system objects and a secondary, marked Default, for the data.
We are preparing a migration to SQL 2014, and the administrator is complaining he won't adopt this structure on the new servers because 'there is no benefit' and 'a backup cannot be restored (!?)'.
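For context, the structure being described can be set up as below (a sketch with placeholder names). A full backup of such a database backs up and restores normally, filegroups included:

-- PRIMARY keeps system objects; new user data lands in SECONDARY by default.
ALTER DATABASE MyDb ADD FILEGROUP [SECONDARY];
ALTER DATABASE MyDb ADD FILE
(
    NAME = MyDb_data2,
    FILENAME = 'D:\Data\MyDb_data2.ndf',  -- placeholder path
    SIZE = 100MB
) TO FILEGROUP [SECONDARY];
ALTER DATABASE MyDb MODIFY FILEGROUP [SECONDARY] DEFAULT;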
A little background on what I am trying to achieve first. We are moving to Azure virtual machines, and we will have 8 disks on the SQL Server box. I am adding more files to the primary filegroup, and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables, which is working fine. No problems so far; all is good.
I now have an additional problem: if there is a LOB or BLOB column on the table, rebuilding the clustered index and all the nonclustered indexes doesn't rebalance the BLOB/LOB data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB or BLOB data, because all the articles say to move to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides, and redistribute it evenly in the same way as the in-row data, which is working fine.
One solution I thought about was to BCP the data out of the table, truncate the table, and then BCP back into the table, which I imagine would have the desired effect of distributing the data evenly over the files.
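A pure T-SQL sketch of that same round trip, staging inside the database instead of BCP files (table names are placeholders; this assumes enough space for the staging copy and no foreign keys blocking the TRUNCATE):

-- Stage the rows, truncate, and reinsert so LOB pages are
-- reallocated proportionally across all files in the filegroup.
SELECT * INTO dbo.BigLobTable_stage FROM dbo.BigLobTable;

TRUNCATE TABLE dbo.BigLobTable;

-- If the table has an identity column, wrap this in SET IDENTITY_INSERT
-- and list the columns explicitly.
INSERT INTO dbo.BigLobTable SELECT * FROM dbo.BigLobTable_stage;

DROP TABLE dbo.BigLobTable_stage;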
We have a table which has one clustered index and one nonclustered index (the primary key). I want to drop the existing clustered index and make the primary key clustered. Is there an easy way to do that? Will DROP_EXISTING help in this matter?
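A sketch of the straightforward route (names are hypothetical; any foreign keys referencing the primary key would have to be dropped and re-created around this). The old clustered index has to go first, since a table can have only one:

-- Step 1: drop the existing clustered index.
DROP INDEX IX_MyTable_OldClustered ON dbo.MyTable;

-- Step 2: re-create the primary key as clustered.
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (Id);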
I have an employee table with various columns; here I have emp_code as my primary key.
Now my concern is which data type should be associated with the primary key (emp_code):
emp_code int not null, OR emp_code varchar not null?
What I understand is to go for the simpler one, i.e. int, since it would be easy to deal with int data.
So my basic question here is: while creating tables, which data type should be assigned to emp_code (int or varchar), and on what basis, since both are possible?
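For illustration, a minimal sketch of the int option (all columns besides emp_code are invented). An int key is 4 bytes, compares quickly, and can be auto-assigned with IDENTITY; varchar mainly earns its keep when codes carry formatting, such as leading zeros or letters, that must be preserved:

-- Hypothetical employee table with an int primary key.
CREATE TABLE dbo.employee
(
    emp_code  int         NOT NULL PRIMARY KEY,  -- add IDENTITY(1,1) if system-assigned
    emp_name  varchar(50) NOT NULL,
    dept_code int         NULL
);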
When I export data from database A to database B and look at the copied tables in database B, all the primary keys are gone and the field is now a "normal" field.
How can I keep the primary keys in the copied tables?
I have created an SSIS package that takes data from a very large table (301 columns) and puts it in a new database in smaller tables. I am using views to control what data goes to the new tables. I also specified that it drop the destination table and recreate it prior to copying the data. The reason for this is so that old data removed from the larger database will get removed from the normalized databases.
I have 2 things I am trying to figure out:
1. I would like to have the package set a specific column in each new table as the primary key (this will allow us to use relationships when querying the data).
2. I decided I wanted to sort the data as it copies. I am using the BI Visual Studio for my editing. In the Data Flow view I cannot seem to disconnect the output from the Source block so I can connect it to the Sort block and then feed that to the output block. What am I missing here?
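For item 1, one approach (a sketch; table and column names are placeholders) is an Execute SQL Task that runs after the data flow, re-adding the key on every run since the destination table is dropped and recreated each time:

-- Run from an Execute SQL Task after the load completes.
ALTER TABLE dbo.NormalizedTable1
    ALTER COLUMN RecordId int NOT NULL;  -- PK columns must be NOT NULL

ALTER TABLE dbo.NormalizedTable1
    ADD CONSTRAINT PK_NormalizedTable1 PRIMARY KEY (RecordId);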
Hi, I'm not used to inserting data into databases; usually I just read the data, so I think my problem might be pretty common. I have a table of longitudes, latitudes, city names, and country names. I set the primary key to be the columns longitude and latitude. I have a method that generates the user's location and the mentioned data. So I want to only insert the new data into the database if it is new and unique. Currently, if the same user goes to my site, it inserts the data fine the first time and then throws an error the second time because it is inserting duplicate primary key information. Do I need to query the database to see if the data record already exists? Or is there a way to insert the record only if it is "new"? Thanks for the help!
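A sketch of doing the existence check and the insert in one statement (table and column names are guesses from the description; @lon, @lat, @city, @country are parameters supplied by the application):

-- Insert only when no row with this (longitude, latitude) pair exists yet.
INSERT INTO dbo.Locations (longitude, latitude, city, country)
SELECT @lon, @lat, @city, @country
WHERE NOT EXISTS
(
    SELECT 1 FROM dbo.Locations
    WHERE longitude = @lon AND latitude = @lat
);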
Hi, how do I delete data which is a foreign key in another table? For example: string query = "DELETE FROM user_details WHERE user_ID = '" + userID.Text + "'"; The user_ID is the primary key in the user_details table and also a foreign key in other tables. Thank you. (:
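A sketch of the two usual options (the child table names are hypothetical): delete the referencing rows first inside a transaction, or declare the foreign keys with ON DELETE CASCADE so the engine does it. The string concatenation in the C# above is also worth replacing with a parameterized command to avoid SQL injection:

-- Option 1: delete children, then the parent, atomically.
BEGIN TRANSACTION;
    DELETE FROM dbo.user_orders   WHERE user_ID = @userID;  -- hypothetical child table
    DELETE FROM dbo.user_settings WHERE user_ID = @userID;  -- hypothetical child table
    DELETE FROM dbo.user_details  WHERE user_ID = @userID;
COMMIT;

-- Option 2: declare the relationship to cascade instead.
ALTER TABLE dbo.user_orders
    ADD CONSTRAINT FK_user_orders_user_details
    FOREIGN KEY (user_ID) REFERENCES dbo.user_details (user_ID)
    ON DELETE CASCADE;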
Is there any way to back up the current online log after complete loss of the corresponding data file?
Example:
(2) Logical Disk Volumes
Disk 1 (D:) contains pubs_data.mdf
Disk 2 (E:) contains pubs_log.ldf
Disk 1 becomes corrupt and goes offline, leaving the database pubs in a suspect state. Is a backup of the current online log pubs_log.ldf possible? If a backup of the log is not possible, are there any other restoration methods that can be used to bring this database back online, rescuing as much information as possible from the current online log pubs_log.ldf?
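A sketch of the tail-log backup that applies here: WITH NO_TRUNCATE lets the log be backed up even though the data file is gone, and the tail can then be restored on top of the last good backups (file names follow the example; backup paths are placeholders):

-- Back up the tail of the log even though pubs_data.mdf is lost.
BACKUP LOG pubs
TO DISK = 'E:\Backup\pubs_tail.trn'  -- placeholder path
WITH NO_TRUNCATE;

-- Then restore the chain: full backup, any diffs/logs, and finally the tail.
RESTORE DATABASE pubs FROM DISK = 'E:\Backup\pubs_full.bak' WITH NORECOVERY;
RESTORE LOG pubs FROM DISK = 'E:\Backup\pubs_tail.trn' WITH RECOVERY;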
I have a Users table that I use for membership. I am using username varchar(30) as the primary key for this table since username will always be unique.
The question I have is regarding how SQL Server actually stores data:
I see that when I add users, they are always stored alphabetically sorted on username.
I was expecting that all users will appear on the users table in the order they were added.
Example: I have 3 users (john, jonah, wilson). Now I add a 4th user with username = 'bob'.
If I execute select * from users, it returns (bob, john, jonah, wilson). Look, bob has become the first row of the table.
My question: is SQL Server moving the 3 older rows to make room for 'bob', and is it also rebuilding part of the index due to this new username 'bob'?
If this is the case, then it will have a big impact if I have 100K users and I add one user that becomes the first row. In that case 99,999 rows would have to move.
Bottom line: inserts and deletes would be very expensive.
I know SQL Server keeps data physically sorted on the PK. But I am concerned here since rows are losing the order in which they were inserted.
I have a table that has a primary key that is auto-incremented by 1. This table's data is cleared out periodically, and as data gets added, the auto-ID primary key continues to increase in numeric value. Once the data is cleared from the table, the auto-ID values could be used again (the eventID is not stored anywhere else). Currently the eventID is at 26,581,399, and I know the maximum int value is 2,147,483,647.
How should I handle this? Or should I rebuild the table every time the data is cleared (programmatically)?
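Two sketches of the usual options (the table name is hypothetical): reseed when the table is cleared, which is safe here because old eventIDs are not stored anywhere, or widen the column to bigint (max 9,223,372,036,854,775,807) and stop worrying:

-- Option 1a: clearing with TRUNCATE resets the identity to its seed.
TRUNCATE TABLE dbo.Events;

-- Option 1b: if rows must be removed with DELETE (e.g. FK constraints),
-- reseed explicitly afterwards.
DELETE FROM dbo.Events;
DBCC CHECKIDENT ('dbo.Events', RESEED, 0);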
Is there any way to get the order in which data should be imported into tables when they have primary and foreign key relations?
For example: we have around 170 tables, and when we try to insert data it throws an error stating that table25's data should be inserted first; when we insert data into table25, it says the same about table70, and so on.
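A sketch of deriving a load order from SQL Server's catalog views (this assumes no circular references, which would need manual handling): tables that reference nothing load first, and every other table loads after the deepest table it references:

WITH fk AS
(
    SELECT parent_object_id     AS child_id,   -- the referencing table
           referenced_object_id AS parent_id   -- the referenced table
    FROM sys.foreign_keys
    WHERE parent_object_id <> referenced_object_id  -- ignore self-references
),
ordered AS
(
    -- Level 0: tables that reference nothing can be loaded first.
    SELECT t.object_id, 0 AS lvl
    FROM sys.tables t
    WHERE NOT EXISTS (SELECT 1 FROM fk WHERE fk.child_id = t.object_id)
    UNION ALL
    -- Each referencing table loads after the tables it depends on.
    SELECT fk.child_id, o.lvl + 1
    FROM fk
    JOIN ordered o ON o.object_id = fk.parent_id
)
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS table_name,
       MAX(lvl)                      AS load_order
FROM ordered
GROUP BY object_id
ORDER BY load_order;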
We have a large table which is very old and which few people take care of. Recently there was a performance problem with a report that queries this table. Eventually we found that this table is missing its primary key, and there is duplicate data, which makes "alter table add primary key" fail.
Besides that, the data size of this table would require an unacceptable amount of time to execute something like "insert into new_table_with_pk select distinct * from old_table".
Do you have any recommendations for fixing this? As the application runs on Oracle, Sybase, and SQL Server, would a cross-database approach work?
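A sketch of the SQL Server side (key_col1/key_col2 stand in for the columns that should have been unique): de-duplicate in place with ROW_NUMBER() rather than copying the whole table, then add the key. Oracle and Sybase would each need their own dialect; Oracle's classic idiom deletes on ROWID instead:

-- Delete all but one row of each duplicate group, in place.
WITH numbered AS
(
    SELECT ROW_NUMBER() OVER
           (PARTITION BY key_col1, key_col2 ORDER BY key_col1) AS rn
    FROM dbo.old_table
)
DELETE FROM numbered WHERE rn > 1;

-- Now the constraint can be added.
ALTER TABLE dbo.old_table
    ADD CONSTRAINT PK_old_table PRIMARY KEY (key_col1, key_col2);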
I have a datasource view DSV1. It points to a datasource DS1 that is considered the "primary".
I have created a Report Model that uses DSV1 (and thus uses DS1)
I created a new datasource, DS2, that I would like to use instead of DS1. (I can't just modify DS1, because the modified version would overwrite the Production datasource when we deploy to our Production environment and break it.)
So, I can go into DSV1 and change all the references from DS1 to DS2.
But that's where the problem lies.
When I try to build, I get the following error:
"The Table property of the Entity "E1" refers to the Table "dbo_View", which is not in the primary data source."
Somehow, the entity is tied to the "primary" datasource. When I change it back to DS1, everything works fine. Any thoughts? What can I do?
Uma writes: "Hi, I have a table whose primary key consists of 6 columns; the total number of columns in the table is 16. Now I want to convert my composite primary key into a simple primary key. There are already 2,200 records in the table, and no referential integrity (foreign keys) exists.
May I convert the composite primary key into a simple primary key in the table like this?"
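A sketch of one way to do it, assuming a new surrogate column is acceptable (all names are hypothetical); with only 2,200 rows and no foreign keys this is a quick change:

-- Replace the 6-column composite key with a single surrogate key.
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;       -- the composite PK

ALTER TABLE dbo.MyTable ADD row_id int IDENTITY(1,1) NOT NULL;

ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (row_id);

-- Optionally keep the old 6-column combination unique as a business rule:
-- ALTER TABLE dbo.MyTable ADD CONSTRAINT UQ_MyTable_natural
--     UNIQUE (col1, col2, col3, col4, col5, col6);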
I currently have a SQL Server cluster set up with a primary DB server, SERVER1, and a standby server, SERVER2. SERVER1 has been failing more than normal in the past few weeks, and it takes up to 5 minutes for SERVER2 to realize that SERVER1 is down. I am looking for a better way to implement a backup server in production with minimum downtime. Please advise.
I need some clarification about adding a file to a mirrored database on the primary server without downtime and without breaking the mirror.
In our environment we are using mount disks on both servers. On the primary, for example, we have an F drive for data files under mount disk 3; on the mirror server we also have the same drive, but under mount drive 2.
As per my knowledge, if the drives are the same we can add ndf files on the primary and that will be reflected on the mirror. But in the current situation I am unsure about mount points with different names.
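For reference, a sketch of what happens (paths are placeholders): the ADD FILE is replayed on the mirror using the literal path from the principal, so what matters is that the same drive letter and folder resolve on the mirror, not which mount disk backs them. If the path cannot be created on the mirror, the mirroring session is suspended:

-- Run on the principal; the identical path must be creatable on the mirror.
ALTER DATABASE database1
ADD FILE
(
    NAME = database1_data2,
    FILENAME = 'F:\Data\database1_data2.ndf',  -- placeholder path
    SIZE = 1GB
) TO FILEGROUP [PRIMARY];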
I use the following 3 sets of SQL code in SQL Server Management Studio Express (SSMSE) to import the CSV data/files into 3 dbo tables via CREATE TABLE & BULK INSERT operations:
-- ImportCSVprojects.sql --
USE ChemDatabase
GO
CREATE TABLE Projects
(
ProjectID int,
ProjectName nvarchar(25),
LabName nvarchar(25)
);
BULK INSERT dbo.Projects
FROM 'c:\myfile\Projects.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
=======================================
-- ImportCSVsamples.sql --
USE ChemDatabase
GO
CREATE TABLE Samples
(
SampleID int,
SampleName nvarchar(25),
Matrix nvarchar(25),
SampleType nvarchar(25),
ChemGroup nvarchar(25),
ProjectID int
);
BULK INSERT dbo.Samples
FROM 'c:\myfile\Samples.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
=========================================
-- ImportCSVtestResult.sql --
USE ChemDatabase
GO
CREATE TABLE TestResults
(
AnalyteID int,
AnalyteName nvarchar(25),
Result decimal(9,3),
UnitForConc nvarchar(25),
SampleID int
);
BULK INSERT dbo.TestResults
FROM 'c:\myfile\LabTests.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
========================================
The 3 csv files were successfully imported into the ChemDatabase of my SSMSE.
2 questions to ask: (1) How can I designate the primary and foreign keys for these 3 dbo tables? Should I do this "designate" thing after the 3 dbo tables are done, or during the "importing" period? (2) How can I set up the relationships among these 3 dbo tables?
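On question (1), doing it after the import is the simpler order, since BULK INSERT loads fastest into plain, unconstrained tables. A sketch of the keys implied by the three CREATE TABLE statements above (constraint names are invented); the foreign keys in the second half are what set up the relationships for question (2):

-- PK columns must be NOT NULL; the CREATE TABLE statements left them nullable.
ALTER TABLE dbo.Projects    ALTER COLUMN ProjectID int NOT NULL;
ALTER TABLE dbo.Samples     ALTER COLUMN SampleID  int NOT NULL;
ALTER TABLE dbo.TestResults ALTER COLUMN AnalyteID int NOT NULL;

-- Primary keys.
ALTER TABLE dbo.Projects    ADD CONSTRAINT PK_Projects    PRIMARY KEY (ProjectID);
ALTER TABLE dbo.Samples     ADD CONSTRAINT PK_Samples     PRIMARY KEY (SampleID);
ALTER TABLE dbo.TestResults ADD CONSTRAINT PK_TestResults PRIMARY KEY (AnalyteID);

-- Relationships: Samples belong to Projects; TestResults belong to Samples.
ALTER TABLE dbo.Samples
    ADD CONSTRAINT FK_Samples_Projects
    FOREIGN KEY (ProjectID) REFERENCES dbo.Projects (ProjectID);

ALTER TABLE dbo.TestResults
    ADD CONSTRAINT FK_TestResults_Samples
    FOREIGN KEY (SampleID) REFERENCES dbo.Samples (SampleID);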
Inside my sp on the Primary server are 3 commands:
exec msdb..sp_start_job @job_name = N'JobA';             -- Running on Primary Server
EXEC Server2.msdb.dbo.sp_start_job @job_name = N'JobB';  -- Running on Secondary Server
Select * from Server2.Table;                             -- Running on Secondary Server
So far the commands work just fine when I kick them off one at a time. But when I kick off the sp as a whole, I notice that the timing just isn't keeping up. For instance, Step 3 is running before Step 2 is finished. It appears that as the Primary Server kicks off Step 2, it doesn't care about the outcome and moves right on to Step 3, which in turn gives me erroneous data. How can I get Step 3 to wait until Step 2 is actually finished, keeping in mind that the Secondary server is part of this equation?
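This is expected behavior: sp_start_job only queues the job and returns immediately. A sketch of one workaround, polling the remote job's activity until the current run stops (job name from the post; the 10-second interval is arbitrary). Running the Step 3 query as a final step of JobB itself would be a cleaner design:

-- Start JobB remotely, then wait for its current run to finish.
EXEC Server2.msdb.dbo.sp_start_job @job_name = N'JobB';

DECLARE @running int;
SET @running = 1;
WHILE @running > 0
BEGIN
    WAITFOR DELAY '00:00:10';

    -- A run is in progress when a start time is set but no stop time yet.
    SELECT @running = COUNT(*)
    FROM Server2.msdb.dbo.sysjobactivity a
    JOIN Server2.msdb.dbo.sysjobs j ON j.job_id = a.job_id
    WHERE j.name = N'JobB'
      AND a.start_execution_date IS NOT NULL
      AND a.stop_execution_date IS NULL;
END

-- Step 3 is now safe to run.
SELECT * FROM Server2.Table;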
Hi Everyone, I had a table in a SQL Server database with data. By mistake, I forgot to put the primary key (composite key) on the table. Now I want to create a composite key, but the duplicate values in those fields (say 2 fields) don't allow me to set the composite key on the corresponding table. Is there any way to do it? Please help me...
I would like to know how to carry out these two cases in an efficient way:
1. I have two SQL 2005 servers running, one as primary and one as secondary. I would like to synchronize or replicate the transactions in real time.
2. When the primary goes down, I would like my secondary server to take its place with no need for IP changes or application-setting changes.
If you have a good architecture for this, please share it with me. I am not sure whether my questions are clear enough. I will also be reading some articles about this. Thank you in advance.
I have recently been looking at a database and wondered if anyone can tell me what the advantages are of supporting a unique column, which can essentially be seen as the primary key, with an identity-seeded integer primary key.
For example:
id [unique integer auto incremented primary key - not null], ClientCode [unique index varchar - not null], name [varchar null], surname [varchar null]
Isn't it just better to use ClientCode as the primary key straight off? When one references the above table, it can be done more easily with the ClientCode, since you don't have to do a lookup on the ClientCode every time.
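For illustration, the two designs side by side (a sketch based on the columns listed; varchar widths are invented). The usual case for the surrogate: a 4-byte int key is copied into every nonclustered index and every referencing table's foreign key, where a wider varchar would bloat them, and ClientCode can be corrected later without rippling through foreign keys, while the unique index still enforces it:

-- Surrogate key plus an enforced natural key (the design in question).
CREATE TABLE dbo.Client
(
    id         int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ClientCode varchar(20)       NOT NULL UNIQUE,
    name       varchar(50)       NULL,
    surname    varchar(50)       NULL
);

-- The alternative being suggested: the natural key as the primary key.
-- CREATE TABLE dbo.Client
-- (
--     ClientCode varchar(20) NOT NULL PRIMARY KEY,
--     name       varchar(50) NULL,
--     surname    varchar(50) NULL
-- );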
Hi, I need a sequential primary key to store a document number. Is IDENTITY(1,1) the best way to achieve this on SQL Server 2000? If I have a great number of users on my application doing inserts on the DB at the same time, could this lead to any problem, like it trying to insert the same PK for some of them? Can this happen, and if so, what can I do to prevent it? I've been reading about NEWSEQUENTIALID(), but it's only available on SQL Server 2005 and I'm still using 2000. Another question: should I avoid using NEWID() and use IDENTITY instead? I've been reading that NEWID() lowers system performance because all values are non-sequential. Thanks!
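For what it's worth, IDENTITY value generation is atomic, so concurrent inserts are never handed the same value (rolled-back inserts do leave gaps, which matters only if document numbers must be gapless). A minimal sketch that works on SQL Server 2000, with SCOPE_IDENTITY() to read back the assigned number safely:

-- Document numbers assigned by the engine; safe under concurrent inserts.
CREATE TABLE dbo.Documents
(
    doc_number int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    title      varchar(100)      NOT NULL
);

INSERT INTO dbo.Documents (title) VALUES ('First document');

-- The value assigned in this scope (unaffected by other sessions).
SELECT SCOPE_IDENTITY() AS new_doc_number;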