Double Take Software With SQL Server
May 7, 2008
Has anyone out there used Double Take Software with SQL Server 2000 for data protection (setting up a DR site)?
Thanks
Venu
I am performing a series of calculations where accuracy is very important, so I have a quick question about single vs double precision variables in SQL 2008.
I'm assuming that there is an easy way to cast a variable that is currently stored as a FLOAT as a DOUBLE prior to these calculations for reduced rounding errors, but I can't seem to find it.
I've tried CAST and CONVERT, but get errors when I try to convert to DOUBLE.
For example...
SELECT CAST(1.0/7.0 AS FLOAT)
SELECT CONVERT(FLOAT, 1.0/7.0)
both give the same 6 decimal place approximation, and the 6 decimals make me think this is single precision.
But I get errors if I try to change the word FLOAT to DOUBLE in either one of those commands...
SELECT CAST(1.0/7.0 AS DOUBLE)
gives "Incorrect syntax near )"
SELECT CONVERT(DOUBLE, 1.0/7.0)
gives "Incorrect syntax near ,"
Every night we connect to a remote server using a linked server and copy details from that database into a loading table, then load it into the 'real' table in our own environment. The remote database we load from has indexes/primary keys that match the 'real' table, but the 'loading' table itself does not have any indexes or primary keys. Both machines are SQL Server 2005.
We first truncate the loading table and do an INSERT ... SELECT from the remote server, then truncate the 'real' table and load it from the 'loading' table.
The issue is that when we attempted to load the 'real' table from the loading table there was a duplicate row, and the process failed with a primary key violation.
I checked the source, which does have the same primary keys, and it did not contain a duplicate row; the loading table, however, did contain a duplicate row.
My question is: under what circumstances could this happen?
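As a side note, a hedged sketch of a pre-load check (dbo.LoadingTable and KeyCol are placeholder names, not the real objects in this environment): flagging duplicate keys in the loading table before the final insert would surface the problem earlier, and with a clearer message, than the primary key violation.
-- Placeholder names: dbo.LoadingTable and KeyCol stand in for the real loading
-- table and its intended primary key column(s).
SELECT KeyCol, COUNT(*) AS copies
FROM dbo.LoadingTable
GROUP BY KeyCol
HAVING COUNT(*) > 1;   -- any rows returned mean the remote pull produced duplicates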
I have run into a somewhat pain in the posterior situation.
We have an app that currently uses SQL Server authentication. The application also uses a linked server. Now, we would like to move to a Windows authentication type of setup, but my network guys tell me that AD doesn't support a "double hop".
From reading what's out there, I'm getting the impression that Kerberos and/or delegation needs to be enabled?
Does anyone know of somewhere I can find some good instructions on how to configure my SQL Server(s) to support the double hop?
I guess I should also tell you what our setup is:
- Our users authenticate onto our network.
- They then authenticate into Citrix.
- From Citrix they authenticate into SQL Server.
- Then there's the linked server.
So essentially the hops would look like this:
Citrix to Database1 is HOP 1
Database1 to Database2 is HOP 2
Thanks!!
Hello, I have a little (big?) problem with a piece of software in ASP / SQL Server. This is a demo of the problem:
ID= 12458 Data=21/01/2004 21:14:45 txt txt txt
ID= 12458 Data=21/01/2004 21:14:45 txt txt txt
ID= 12458 Data=21/01/2004 21:14:45 txt txt txt
ID= 12458 Data=21/01/2004 21:14:45 txt txt txt
ID= 12458 Data=21/01/2004 21:14:45 txt txt txt
So: I insert a record from my ASP page, but I find the same record repeated several times... Why?
Hi,
I'm using SQL Server 2005. In my 'Books' table, some values in the 'subject' column contain single and double quotes, for example: Quantu"m phys'ics.
I'm passing the subject as a parameter, and it has to display results if the corresponding subject name is in the table. Here is my code... I'm using a dynamic query...
alter procedure getbooks
@sub varchar(200)
as
begin
declare @sqlq nvarchar(2000)
set @sqlq='select subject from books where subjectname like '''+@sub+'%'''
exec sp_executesql @sqlq
end
How can I search for words like Quantu"m phys'ics with the above query? If anyone knows how to do this, please send me the code.
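A hedged sketch of one way around this (keeping the subjectname column name used in the procedure above): passing @sub as a genuine parameter to sp_executesql means embedded single or double quotes in the value never break the dynamic string, so no manual escaping is needed.
ALTER PROCEDURE getbooks
    @sub varchar(200)
AS
BEGIN
    DECLARE @sqlq nvarchar(2000);
    -- The parameter stays inside the dynamic statement instead of being
    -- concatenated, so values like Quantu"m phys'ics pass through untouched.
    SET @sqlq = N'SELECT subject FROM books WHERE subjectname LIKE @pattern';
    EXEC sp_executesql @sqlq,
                       N'@pattern varchar(201)',
                       @pattern = @sub + '%';
END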
Dear Programmers,
I want to store a .NET double value (16.95) in a database column of type numeric, but it is stored as an integer (17), and when I read the value back in my application it is displayed as 17. How can I solve this problem? Thanks in advance,
Burak
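A minimal sketch of the likely cause, using a hypothetical table: numeric declared without a scale defaults to numeric(18,0), which rounds 16.95 to 17; giving the column an explicit scale preserves the decimals.
-- dbo.Payments is a hypothetical table used only to illustrate the column type.
CREATE TABLE dbo.Payments
(
    PaymentId int IDENTITY(1,1) PRIMARY KEY,
    Amount    numeric(10, 2) NOT NULL   -- numeric with no scale defaults to numeric(18,0)
);

INSERT INTO dbo.Payments (Amount) VALUES (16.95);

SELECT Amount FROM dbo.Payments;  -- returns 16.95, not 17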
Hi, I just wanted to know: if I need to insert a string with double quotes in it into a SQL Server table, do I need to use any delimiters, like "? An insert like:
insert into producttable values(key, "double quote text")
where I need the "double quote text" to go in like that, with the " " at both ends. Thank you.
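For what it's worth, a minimal sketch (producttable and its columns are taken loosely from the post): double quotes need no delimiter or escaping inside a T-SQL string literal; only single quotes have to be doubled up.
-- Hypothetical table matching the post's example; only the single-quote rule matters.
CREATE TABLE producttable (keycol int, txt varchar(100));

INSERT INTO producttable (keycol, txt)
VALUES (1, '"double quote text"');       -- double quotes go in as-is

INSERT INTO producttable (keycol, txt)
VALUES (2, 'O''Brien');                  -- a single quote is escaped by doubling it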
Hi all, I'm reading some doubles from a DB table using C#, and some of these doubles are negative numbers. They are negative because the doubles are GPS data and all my longitude values are, for example, '-6.214545'. Now, I physically put these doubles into the table by typing the '-' character and then the number. What is happening is that the code is reading them as positive doubles, which obviously throws off the whole app, as the difference between '-6.214545' and '6.214545' is massive! I've declared them as type decimal(18, 6). Any ideas as to what might be the problem?
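A quick hedged check (dbo.GpsPoints and Longitude are placeholder names): if this returns the rows with their minus signs intact, the sign is stored correctly in the decimal(18,6) column and the flip is happening on the C# side, for example in the parsing or mapping code.
-- Placeholder table/column names; decimal(18,6) stores the sign exactly as entered.
SELECT Longitude
FROM dbo.GpsPoints
WHERE Longitude < 0;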
I am attempting to reach some Clipper tables through a 32-bit ODBC driver from a 64-bit SQL Server. As there is no 64-bit driver offered for Clipper, I am pursuing a solution similar to the one described here:
Creating a Linked Server with 64 bit SQL Server 2008 to MS Access
It involves using a SQL Express 32 bit instance as a bridge.
I have created a Linked Server on the 32-bit instance MTESTEXPRESS as follows:
EXEC sp_addlinkedserver @server = N'ABDATA', @srvproduct=N'DataDirect 4.1', @provider=N'MSDASQL', @datasrc=N'ABServerCA'
On the 64-bit instance ALISTESTER I have another Linked Server as follows:
EXEC sp_addlinkedserver @server = N'ABACUS', @provider=N'SQLNCLI', @datasrc=N'ALISTESTER\MTESTEXPRESS'
The suggestion is to then use a select statement such as:
SELECT * FROM OPENQUERY(ABACUS, 'SELECT COUNT(*) FROM ABDATA...ABBATCH')
Unfortunately, the DataDirect driver for MTESTEXPRESS will not recognize the 'ABDATA...ABBATCH' 3-part naming convention. The error message is:
An invalid schema or catalog was specified for the provider "MSDASQL" for linked server "ABDATA"
Is there some other way to select from the MTESTEXPRESS linked server?
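One workaround that may be worth trying (a sketch only, reusing the linked-server names from the post): instead of the three-part name, pass the whole statement through to MTESTEXPRESS and let that instance run its own OPENQUERY against ABDATA, so the DataDirect driver never sees the three-part syntax.
-- Outer OPENQUERY runs on the 32-bit MTESTEXPRESS bridge (linked server ABACUS);
-- the inner OPENQUERY is what actually talks to the Clipper data via ABDATA.
SELECT *
FROM OPENQUERY(ABACUS,
    'SELECT * FROM OPENQUERY(ABDATA, ''SELECT COUNT(*) FROM ABBATCH'')');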
Hi!
Is it possible to double-click the SQL Server Express installer and have it install SQL Server 2005 in silent mode and set the sa password?
I just want a double-click alternative to the command-line installation:
setup.exe /qb INSTANCENAME=<InstanceName> SAPWD=<StrongPassword>
Is it possible?
thanks in advance
I had a situation that required me to set up SQL Server to do full-text search against both English content and Chinese content. I am not sure if this is achievable in a SQL Server environment. Any help is appreciated.
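It should be achievable: each column in a full-text index carries its own word-breaker language, so one index can cover both. A minimal sketch, assuming a hypothetical table dbo.Docs with an EnglishText column, a ChineseText column, and a unique key index named PK_Docs (LCID 1033 is English, 2052 is Simplified Chinese):
-- Hypothetical table and index names; the LANGUAGE clause picks the word breaker per column.
CREATE FULLTEXT CATALOG DocsCatalog;

CREATE FULLTEXT INDEX ON dbo.Docs
(
    EnglishText LANGUAGE 1033,   -- English word breaker
    ChineseText LANGUAGE 2052    -- Simplified Chinese word breaker
)
KEY INDEX PK_Docs ON DocsCatalog;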
I have data like this:
"entitlementwrapper" : [ {
"Type" : "Factory Warranty",
"Date_Type" : "Ship date",
"Status" : "Active",
"Start_Date" : "2012-12-21",
"End_Date" : "2014-01-19",
"Days_Left" : "116",
"Term" : "13",
"Description" : "Wty: HP HW Replacement Support",
"IsTrusted" : "Y",
"Transaction_ID" : "4644780453"
}
I want to get only the data inside the double quotes using a SQL query.
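If the instance is SQL Server 2016 or later, OPENJSON can pull the quoted values out directly; a minimal sketch, assuming the fragment is wrapped into a valid JSON document first (on older versions this would have to be done with string functions instead).
-- The fragment from the post, trimmed and wrapped so it parses as valid JSON.
DECLARE @j nvarchar(max) = N'{ "entitlementwrapper" : [ {
    "Type" : "Factory Warranty",
    "Status" : "Active",
    "Start_Date" : "2012-12-21",
    "End_Date" : "2014-01-19"
} ] }';

-- Each quoted name/value pair comes back as a row.
SELECT [key], [value]
FROM OPENJSON(@j, '$.entitlementwrapper[0]');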
I have a SQL SELECT statement like the one below:
0 AS SalaryMin,
2088 AS SalaryMax,
2088 AS BillableHours,
'Month' AS SalaryPaidCode,
0 AS SalaryBreakdownHourly,
0 AS SalaryBreakdownDaily,
[Code] ...
When outputting to a CSV file
I got: 0,2088,2088,"Month",0,0,0,0,0,0,0,"N/A","N/A","G","N/A","Exempt","Other",1
How can I remove all double quotes from the string fields, so that I can get the result below in the output?
0,2088,2088,Month,0,0,0,0,0,0,0,N/A,N/A,G,N/A,Exempt,Other,1
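Two hedged possibilities, depending on where the quotes come from: if the export tool is adding them as a text qualifier, clearing the text qualifier in the flat-file destination (or export wizard) is the usual fix; if they are embedded in the data itself, REPLACE strips them before the export. A minimal sketch of the latter, reusing a literal from the query above:
-- REPLACE removes embedded double quotes from a character value before export.
SELECT 0 AS SalaryMin,
       2088 AS SalaryMax,
       2088 AS BillableHours,
       REPLACE('Month', '"', '') AS SalaryPaidCode;  -- wrap each string column this way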
I have a query question.
Consider a table with the following structure:
RecordID (PK - int) - RecordDate (DateTime)
I need to find all records that fall within a 7 day period slot based on the first RecordDate of a specific slot.
Example, consider the following records:
RecordID - RecordDate
1 - 2015-04-01 14:00
2 - 2015-04-03 15:00
3 - 2015-04-03 16:05
4 - 2015-04-03 19:23
5 - 2015-04-06 09:15
6 - 2015-04-06 11:30
7 - 2015-04-07 12:00
8 - 2015-04-09 15:15
The result of the query should look something like this:
1
2
5
7
8
So basically I'd like to leave records 3 and 4 out because they fall within 24 hours of record 2, and leave record 6 out because it falls within 24 hours of record 5. I tried working with a CTE that set a DATEADD(d, 1, RecordDate), joined it on itself and used a BETWEEN From/To filter on the join, but that didn't work. I don't think NTILE will work for this either?
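One approach that handles the "each kept row resets the 24-hour window" dependency (a sketch only; dbo.Records is a placeholder name for the table described above): because every kept row depends on the previously kept one, a simple loop that keeps jumping to the first row at least 24 hours later is easier to get right than a single set-based query. Against the sample data it returns 1, 2, 5, 7, 8.
-- Placeholder table dbo.Records(RecordID int, RecordDate datetime).
DECLARE @kept TABLE (RecordID int, RecordDate datetime);
DECLARE @lastId int, @lastDate datetime, @cutoff datetime;

-- Start from the earliest row.
SELECT TOP (1) @lastId = RecordID, @lastDate = RecordDate
FROM dbo.Records
ORDER BY RecordDate, RecordID;

WHILE @lastId IS NOT NULL
BEGIN
    INSERT INTO @kept (RecordID, RecordDate) VALUES (@lastId, @lastDate);
    SET @cutoff = DATEADD(HOUR, 24, @lastDate);

    -- Jump to the first row at least 24 hours after the last kept row.
    SELECT TOP (1) @lastId = RecordID, @lastDate = RecordDate
    FROM dbo.Records
    WHERE RecordDate >= @cutoff
    ORDER BY RecordDate, RecordID;

    IF @@ROWCOUNT = 0
        SET @lastId = NULL;   -- nothing further out, so stop
END

SELECT RecordID
FROM @kept
ORDER BY RecordDate;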
I was just doing a table import task (right-click database name/Tasks/Import Data), not knowing my boss had just loaded the same file. It did not warn me that the table already existed; it just appended the same information to the table, doubling it. I fixed that one, but it seems I might have done the same thing myself in the last couple of weeks, and I'd like to find that table, and there have been a LOT of table loads.
I'm thinking I could find the affected tables by comparing:
select count(*) from (select distinct * from tblname) x
against
select count(*) from tblname
But how do I incorporate this into some sort of proc that will go through all the tables and let me know where the issue is? I'm swamped and don't have the time to go through each table manually.
I have code that shows me row counts, and have been able to eliminate a few tables from contention, as they are loading monthly data that should only increase minorly month to month, so, no double jumps there.
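A hedged sketch of a sweep over all user tables (SQL Server 2005+ catalog views assumed; tables with text, ntext, image or xml columns would need special handling because SELECT DISTINCT * can't run over them, and any table that legitimately stores duplicate rows will also be flagged): for each table it compares the total row count with the distinct row count and prints the ones that differ.
-- Compare total vs distinct row counts for every user table and report mismatches.
DECLARE @name nvarchar(300), @sql nvarchar(max), @total int, @distinct int;

DECLARE tbl_cursor CURSOR FOR
    SELECT QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

OPEN tbl_cursor;
FETCH NEXT FROM tbl_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'SELECT @t = COUNT(*) FROM ' + @name + N';'
             + N'SELECT @d = COUNT(*) FROM (SELECT DISTINCT * FROM ' + @name + N') AS x;';
    EXEC sp_executesql @sql, N'@t int OUTPUT, @d int OUTPUT',
                       @t = @total OUTPUT, @d = @distinct OUTPUT;

    IF @total <> @distinct
        PRINT @name + ': ' + CAST(@total - @distinct AS varchar(20)) + ' duplicate rows';

    FETCH NEXT FROM tbl_cursor INTO @name;
END
CLOSE tbl_cursor;
DEALLOCATE tbl_cursor;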
Hi
I have a table in SQL Server with following spec
Table1(Grossamount(money))
I have an SSIS variable called grosstot of type Double and use the following SQL in an Execute SQL task in SSIS:
Select Sum(Grossamount) from Table1
I then assign the result of above sql stmt to the SSIS variable grosstot within the same Execute SQL task.
it gives me the error :
[Execute SQL Task] Error: An error occurred while assigning a value to variable "grosstot ": "The type of the value being assigned to variable "User::grosstot " differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object. ".
I tried the following SQL to no avail:
Select CONVERT(numeric(12,2), Sum(Grossamount)) from Table1
Your help is very much appreciated.
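A hedged sketch of what usually fixes this: the SSIS Double variable maps to a float result, so casting the SUM to float (and naming the column, in case the result-set binding wants a name) tends to let the single-row result assign cleanly to User::grosstot.
-- money SUM cast to float so the single-row result matches the SSIS Double variable.
SELECT CAST(SUM(Grossamount) AS float) AS grosstot
FROM Table1;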
Example of data in CSV are as follows:
"XXX","0001",-990039739 ,0 ,0 ,0 ,0 ,0 ,0
"ABC"," ",-3422054702 ,0 ,481385 ,0 ,0 ,0 ,0
"JJZ","0001",0 ,0 ,0 ,0 ,0 ,0 ,0Here's my format:
12.0
10
1 SQLCHAR 0 0 "\"" 0 "" ""
2 SQLCHAR 0 5 "\",\"" 1 OKCCY SQL_Latin1_General_CP1_CI_AS
[Code] ....
Ok, I have read several posts about this problem, and even an explanation on how NTLM fails in double-hop scenarios. I understand it.
I, however, cannot understand why it is failing, as I am working in an AD environment with domains in Windows 2000 native mode, which should be using Kerberos and not NTLM.
My setup is this: Server A is my own server, a Windows Server 2003 Standard setup with SQL Server 2000. We also have Server B, same OS and same SQL Server; I am NOT admin for Server B, but I am for Server A. Both SQL Servers are setup for Windows Authentication only.
Now, I have a web application (ASP.NET 2.0) running on Server A that connects to SQL Server on Server A. I have learned that the website connects to SQL Server as the user NT AUTHORITY\NETWORK SERVICE (as opposed to classic ASP, where the user was impersonated), so I created a SQL Server login for NT AUTHORITY\NETWORK SERVICE. The website successfully connects to SQL Server A.
The next step is to create a stored procedure on Server A that retrieves data from Server B in the form of a linked server. This way the ASP.NET application has access to the desired data.
This is where the problem presents itself:
Msg 18456, Level 14, State 1, Line 2
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
The SQL Server login on Server B is a local server security group that contains two AD accounts: domainA\myownlogin and domainA\ServerA, where the latter is the computer object for Server A. Server B is in domainB, and two-way trusts are in place through transitivity via the forest root domain. My own account is there for easy testing of scripts from my workstation, but the idea is to remove it and have the website use the computer credentials once it hits production.
Any ideas?? Should I check something in the domains? How can I verify that kerberos is being used so I can discard this as a possibility? Any and all feedback is welcome.
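To answer the "how can I verify Kerberos" part: on SQL Server 2005 and later the connection DMV reports the authentication scheme directly, as in the sketch below; on the SQL Server 2000 boxes described here that DMV does not exist, so the usual route is the Security event log on the server, where the logon event records the authentication package (Kerberos vs NTLM).
-- SQL Server 2005+ only: auth_scheme comes back as KERBEROS or NTLM for the current session.
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;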
(The post links to the Double-Take product page at http://www.doubletake.com/products/double-take/default.aspx.) My concern is this: since the Double-Take database recovery depends on constantly refreshed .mdf and .ldf files being moved to <the target location>, and SQL Server only supports copying these files through a detach and reattach process (although you can safely stop the SQL service and then copy the files), I am wondering what happens when you copy .mdf and .ldf files that contain an unfinished disk write. In a power-loss situation, when the server loses power and a disk write does not complete, the database goes into a "suspect" status during recovery and is not usable until it is put into emergency status and fixed, or data is recovered from backup. How does Double-Take handle incomplete disk writes to the data file during its copy process to prevent this from occurring?
Now, the product reviews I have just read say that changes from the source to the target are made at the byte level. So if a byte is changed, that byte is moved to your target server. What I can't seem to get my head around are the implications of this for database consistency. Anyone have any light to shed for me?
Hello all!
I have three columns of data... Test Name, Test Parameter, Test Result.
I have one column that sums all failed tests grouped by Test Name, and Test Parameter
ie, select Test Name, sum(rows of tests that failed) Failed
etc etc
group by Test Name, Test Parameter
But I also want a column that sums only by Test Name, regardless of Test Parameter. Should I try something like "sum(Failed)" grouped by Test Name in some kind of subquery, or what would you suggest? I know there will be duplicate entries.
Thanks for any help
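A hedged sketch of one way to get both totals in one pass (dbo.TestResults, its columns, and the 'Fail' marker are placeholder names for whatever identifies a failed test): nesting the grouped SUM inside a windowed SUM partitioned by Test Name adds the per-test total alongside the per-parameter total on SQL Server 2005 or later; a correlated subquery grouped only by Test Name would do the same job on older versions.
-- Placeholder table/columns: TestName, TestParameter, TestResult ('Fail' = failed test).
SELECT TestName,
       TestParameter,
       SUM(CASE WHEN TestResult = 'Fail' THEN 1 ELSE 0 END) AS FailedForParameter,
       SUM(SUM(CASE WHEN TestResult = 'Fail' THEN 1 ELSE 0 END))
           OVER (PARTITION BY TestName)                     AS FailedForTestName
FROM dbo.TestResults
GROUP BY TestName, TestParameter;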
I have 2 tables ZIPCROSS and HOUSEHOLDS. The fields for each are as follows:
<PRE>
ZIPCROSS HOUSEHOLDS
-------- ----------
AREAID ZIP
ZIP TOTAL
</PRE>
ZIPCROSS holds the zipcodes assigned to each AREAID. HOUSEHOLDS contains the TOTAL number of households in each zipcode.
Now, I need to build a query that returns SUM of TOTAL for a given AREAID grouped by SCF (first 3 numbers of the zipcode) and SUM of TOTAL for a given SCF. Thus the results should look something like this:
<PRE>
AREAID SCF TOTAL SCFTOTAL
------ --- ------- ---------
1 900 1234 43210
1 901 2345 54321
</PRE>
etc... I can write a query that gets the right TOTAL or the right SCFTOTAL, but not both in one query. The following query gives me the right SCFTOTAL but not TOTAL.
SELECT A.AREAID, LEFT(C.ZIP,3) AS SCF, SUM(D.TOTAL) AS TOTAL, SUM(E.TOTAL) AS SCFTOTAL
FROM AREAORDER A JOIN ZIPCROSS C ON A.AREAID=C.AREAID
JOIN HOUSEHOLDDATA D ON C.ZIP=D.ZIP
JOIN HOUSEHOLDDATA E ON LEFT(C.ZIP,3)=LEFT(E.ZIP,3)
WHERE A.MAILINGORDERID=133
GROUP BY A.AREAID, LEFT(C.ZIP,3)
ORDER BY A.AREAID, SCF
I'm aware of why this doesn't work but I can't seem to find the right approach. Any solutions? TIA.
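A hedged sketch of one fix, keeping the table names from the query above: summing the household totals per SCF once in a derived table avoids the row multiplication that the second HOUSEHOLDDATA join causes, so TOTAL and SCFTOTAL can both come out of a single query.
-- Per-SCF totals are computed once, then joined back, so SUM(D.TOTAL) is not inflated.
SELECT A.AREAID,
       LEFT(C.ZIP, 3) AS SCF,
       SUM(D.TOTAL)   AS TOTAL,
       MAX(S.SCFTOTAL) AS SCFTOTAL      -- constant per SCF group, so MAX just picks it
FROM AREAORDER A
JOIN ZIPCROSS C      ON A.AREAID = C.AREAID
JOIN HOUSEHOLDDATA D ON C.ZIP = D.ZIP
JOIN (SELECT LEFT(ZIP, 3) AS SCF, SUM(TOTAL) AS SCFTOTAL
      FROM HOUSEHOLDDATA
      GROUP BY LEFT(ZIP, 3)) AS S
     ON S.SCF = LEFT(C.ZIP, 3)
WHERE A.MAILINGORDERID = 133
GROUP BY A.AREAID, LEFT(C.ZIP, 3)
ORDER BY A.AREAID, SCF;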
I am experiencing issues with database files that have been moved using Double Take. When I try to bring up the database, it behaves as though the DBs are corrupted. Bottom line is that it is not working. Can someone who has this working, or who has experienced similar issues, shed some light? Thanks in advance.
Hi everybody.
I have this table and I want to filter only those records whose ids appear more than once.
table
id field1
1 ! first
1 ! second
2 ! first
3 ! first
3 ! second
4 ! first
the result should be
id field1
1 ! first
1 ! second
3 ! first
3 ! second
I am using this query:
select field1, id, count(id) as countid
FROM table1
GROUP BY field1
HAVING count(id) >1
The countid column always gives me the value of one (I don't know why), so I can't get the results I want.
thanks
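For reference, a minimal sketch against the table1 layout shown above: grouping by id alone identifies the ids that appear more than once, and the outer query then returns every row for those ids, which matches the expected result.
SELECT t.id, t.field1
FROM table1 AS t
WHERE t.id IN (SELECT id
               FROM table1
               GROUP BY id
               HAVING COUNT(*) > 1)   -- ids 1 and 3 in the sample data
ORDER BY t.id, t.field1;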
Hi all. How do I convert 1.000000000 to 1.00?
sample...
select amount from .....
Thanks
-Ron-
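A minimal sketch (the table name is a placeholder): casting to a decimal with scale 2 trims 1.000000000 down to 1.00.
SELECT CAST(amount AS decimal(10, 2)) AS amount
FROM dbo.SomeTable;   -- dbo.SomeTable stands in for the real table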
I have some data, counts identified by location and grid East, like this:
Loc East N
CA 100 3
CA 103 5
CA 109 2
CA 110 3
I'm interested in the total of N on either side of the largest gap in Eastings. In this case the largest gap is 6 (between 103 and 109); the sum of N for the 2 rows below the gap is 8, and for the 2 above the gap it's 5.
The problem is to locate the largest gap and compute the sum of N for the cases on either side. There are multiple locations and multiple Eastings per location, but only one largest gap. (If there are two largest gaps, it doesn't matter which one is used for the sums.)
I can do this with multiple passes: first locate the largest gap, then go back and locate the Eastings on either side, then sum up the Ns. That's really clumsy, I can't figure out how to do it more quickly, and I'm not sure what I'm doing is right. Any help would be appreciated.
Thanks,
Jim Geissman
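A hedged sketch of a single-pass version, assuming a hypothetical table dbo.Counts(Loc, East, N) holding the sample rows and assuming SQL Server 2012 or later for LEAD (on older versions a self-join to the next-higher Easting per location does the same job): pairing each Easting with the next one finds the largest gap, and the two outer sums split N below and above it. Against the sample data it returns 8 and 5.
WITH Gaps AS
(
    SELECT Loc, East, N,
           LEAD(East) OVER (PARTITION BY Loc ORDER BY East) AS NextEast
    FROM dbo.Counts
),
BigGap AS
(
    SELECT TOP (1) Loc, East AS LowEdge, NextEast AS HighEdge
    FROM Gaps
    WHERE NextEast IS NOT NULL
    ORDER BY NextEast - East DESC      -- the single largest gap across all locations
)
SELECT g.Loc,
       SUM(CASE WHEN c.East <= g.LowEdge  THEN c.N ELSE 0 END) AS BelowGap,  -- 8 for the sample
       SUM(CASE WHEN c.East >= g.HighEdge THEN c.N ELSE 0 END) AS AboveGap   -- 5 for the sample
FROM BigGap AS g
JOIN dbo.Counts AS c ON c.Loc = g.Loc
GROUP BY g.Loc;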
Hi. I never use/practice SQL a lot (VB more; I have the free MSDE 2000). Two questions:
A) Is it simple to write a T-SQL query that produces 2) as its result, starting from 1)?
B) How do I test dynamic SQL with a parameter (using VB ADO)?
1) before the query:
columA columB
d e <- same
e d <- same
e e
e d <- same
2) after the query:
columA columB
d e (or e d)
e e
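Reading example 1) as pairs where (d, e) and (e, d) count as the same, a minimal sketch for question A (dbo.Pairs is a placeholder table name): normalising each pair so the smaller value comes first makes the reversed pairs compare equal, and DISTINCT then keeps one row per unordered pair, giving result 2).
-- Placeholder table; columA/columB hold the single-character values from the example.
SELECT DISTINCT
       CASE WHEN columA <= columB THEN columA ELSE columB END AS columA,
       CASE WHEN columA <= columB THEN columB ELSE columA END AS columB
FROM dbo.Pairs;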
Hi,
I am creating a flat file connection to a .csv file
In the Columns section of the flat file connection manager editor, I am not sure why the text values from the .csv file are shown with double quotes around them.
They do not have "" in the .csv file.
Thanks
What is the exact difference between the double and decimal data types? Could someone give an example?
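A minimal sketch of the practical difference: in SQL Server the double type is float (approximate, binary floating point), while decimal/numeric is exact. Adding 0.1 ten times shows the drift on the float side.
DECLARE @f float, @d decimal(10, 2), @i int;
SET @f = 0; SET @d = 0; SET @i = 0;

WHILE @i < 10
BEGIN
    SET @f = @f + 0.1;   -- 0.1 has no exact binary representation, so error accumulates
    SET @d = @d + 0.1;   -- decimal stores 0.1 exactly
    SET @i = @i + 1;
END

SELECT @f AS float_sum,     -- approximately 0.9999999999999999
       @d AS decimal_sum;   -- exactly 1.00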
I need some help with a double pivot problem. To me it looks like the best way to do what follows is the SQL 2000 "standard method" for doing pivots, enclosing CASE statements in MAX for the columns being pivoted. I can also see perhaps concatenating the "contact" and "contact_phone" columns into a single column in a derived table and then pivoting the concatenated combination, similar to what I did in this example.
However, in this case I am interested in seeing if I can somehow get two distinct PIVOT clauses to do this work, and I am having no luck with it. I have this example, but it looks pretty cruddy:
Code Snippet
-- --------------------------------------------------------------------------
-- Data for this problem is stored in table @support and consists of
-- (1) application_name
-- (2) support_role -- whether the contact is the primary or secondary
-- support associate
-- (3) contact -- the name of the support associate
-- (4) contact_phone -- the phone number of the support associate
--
-- The problem is to pivot the contact information and output columns:
-- (1) Application Name
-- (2) First Contact -- the name of the primary contact
-- (3) First Phone -- the phone number of the primary contact
-- (4) Second Contact -- the name of the secondary contact
-- (5) Second Phone -- the phone number of the secondary contact
--
-- This method uses two separate PIVOT clauses to pivot the data into
-- columns. This looks pretty cruddy.
-- --------------------------------------------------------------------------
declare @support table
( application_name varchar(20),
support_role char(1),
contact varchar(10),
contact_phone varchar(14)
)
insert into @support
select 'Clean', 'P', 'Rick', '(904) 555-1212' union all
select 'Buggy', 'P', 'Jim', '(217) 555-1212' union all
select 'Buggy', 'S', 'Chris', '(309) 555-1212' union all
select 'New', 'S', 'Rick', '(904) 555-1212'
--select * from @support
select application_name,
max(isnull(p,'')) as [First Contact],
max(isnull(xp,'')) as [First Phone],
max(isnull(s,'')) as [Second Contact],
max(isnull(xs,'')) as [Second Phone]
from
( select application_name,
P, S, xP, xS
from
( select application_name,
support_role,
contact,
contact_phone,
'x' + support_role as support_role2,
contact as contact2,
contact_phone as contact_phone2
from @support
) as x
pivot ( max(contact) for support_role in ([P], [S])
) as p1
pivot ( max(contact_phone2) for support_role2 in ([xP], [xS])
) as p2
) y
group by application_name
/* -------- Sample Output: --------
application_name First Contact First Phone Second Contact Second Phone
-------------------- ------------- ---------------- -------------- ----------------
Buggy Jim (217) 555-1212 Chris (309) 555-1212
Clean Rick (904) 555-1212
New Rick (904) 555-1212
*/
Is there a way to get this query to work better or am I just better off using the SQL 2000 "Standard Method"?
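For comparison, a sketch of the SQL 2000-style conditional aggregation the post refers to, run against the same @support table variable declared above; it produces the same sample output in a single pass with no nested PIVOT clauses, so it may read more cleanly even on 2005.
-- Conditional aggregation: one MAX(CASE ...) per output column, grouped by application.
select application_name,
       max(case when support_role = 'P' then contact       else '' end) as [First Contact],
       max(case when support_role = 'P' then contact_phone else '' end) as [First Phone],
       max(case when support_role = 'S' then contact       else '' end) as [Second Contact],
       max(case when support_role = 'S' then contact_phone else '' end) as [Second Phone]
from @support
group by application_name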
My table is below, with a doubled row:
lot value date
2 300 3/2/06
3 200 6/5/05
4 100 5/21/07
5 340 6/23/06
2 250 4/3/06
My query is:
SELECT lot, value, date
FROM my table
How can I eliminate one row of lot 2 and keep only the one with the most recent date?
Thanks for your help
Daniel
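A minimal sketch (assuming the table is reachable as dbo.MyTable; [date] is bracketed because it collides with a type name): keeping only the row with the latest date per lot collapses the duplicate lot 2 down to the 4/3/06 row.
SELECT t.lot, t.value, t.[date]
FROM dbo.MyTable AS t
WHERE t.[date] = (SELECT MAX(t2.[date])      -- latest date for this lot
                  FROM dbo.MyTable AS t2
                  WHERE t2.lot = t.lot);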
Hi,
I am using SQL Reporting Services 2000 - is it possible to make a report that prints double-sided?
Tables :
EmailUsers
ID int - PK
Email nvarchar(256)
ListsUsers
ListID int - FK to List Table - Combo PK
UserID int - FK to EmailUsers Table - Combo PK
When a person adds a user I need to:
A. insert them as a new entry into EmailUsers - no problem
B. insert their EmailUsers.ID from step A and ListID (passed in parameter) into ListsUsers - not so easy
C. if they're already in EmailUsers don't insert them but pass their existing EmailUsers.ID to part B
Any thoughts or examples I can follow? Maybe it's easier to do two separate queries and control the IF EXISTS logic in ASP.NET?
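It can be done in one stored procedure rather than two round trips from ASP.NET; a hedged sketch, assuming EmailUsers.ID is an identity column and using the table and column names from the post (the procedure name is made up):
-- Reuse the existing EmailUsers row if the address is already there, otherwise
-- insert it, then link the user to the list if not already linked.
CREATE PROCEDURE dbo.AddUserToList
    @Email  nvarchar(256),
    @ListID int
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @UserID int;

    SELECT @UserID = ID FROM dbo.EmailUsers WHERE Email = @Email;

    IF @UserID IS NULL
    BEGIN
        INSERT INTO dbo.EmailUsers (Email) VALUES (@Email);
        SET @UserID = SCOPE_IDENTITY();       -- assumes ID is an identity column
    END

    IF NOT EXISTS (SELECT 1 FROM dbo.ListsUsers
                   WHERE ListID = @ListID AND UserID = @UserID)
        INSERT INTO dbo.ListsUsers (ListID, UserID) VALUES (@ListID, @UserID);
END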