Hey guys, I've got a problem with my stored procedures: my page keeps repeating the same data. To counter this I used SELECT DISTINCT instead of plain SELECT, but when I change SELECT to SELECT DISTINCT the page will not display at all; it just errors out. For your information, my stored procedures were auto-generated by some security software, so can you guys help me out with my code?
FROM #Vulns v
INNER JOIN TargetHost t WITH (NOLOCK)
    ON v.TargetID = t.TargetID
INNER JOIN (SecurityChecks sc WITH (NOLOCK)
    LEFT OUTER JOIN Remedies r WITH (NOLOCK)
        ON sc.SecChkID = r.SecChkID)
    ON v.SecChkID = sc.SecChkID
INNER JOIN ObjectView o WITH (NOLOCK)
    ON v.ObjectID = o.ObjectID
LEFT OUTER JOIN SensorData1 s WITH (NOLOCK, INDEX(SensorData1_AK3))
    ON v.ObservanceID = s.ObservanceID
    AND s.Cleared = 'n'
LEFT OUTER JOIN SensorDataAVP a WITH (NOLOCK)
    ON s.SensorDataID = a.SensorDataID
    AND a.AttributeValue IS NOT NULL
    AND a.AttributeValue != ''
UNION ALL
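One thing I found while searching: SELECT DISTINCT errors out if any selected column is a text, ntext, or image type, because those types cannot be compared. I can't tell from the generated procedure whether AttributeValue is one of those, but if it is, converting it first is the usual workaround. A minimal sketch with placeholder columns, not my real select list:

SELECT DISTINCT
    s.SensorDataID,                                             -- placeholder column
    CONVERT(VARCHAR(8000), a.AttributeValue) AS AttributeValue  -- text -> varchar so DISTINCT can compare it
FROM SensorData1 s WITH (NOLOCK)
LEFT OUTER JOIN SensorDataAVP a WITH (NOLOCK)
    ON s.SensorDataID = a.SensorDataID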
What is the best way to ensure 100% availability of our SQL databases? Should we use clustering or mirrored servers (Vinca or Double-Take)? What are other organizations doing about this?
After some feedback, here's more detail for those of you who might have done something similar:
We are going to be running three SQL Servers (on SQL Standard licences). Two are live; one is a hot spare in case we suffer a total loss of either live SQL box. Basically, what I want to do is have the hot spare updated periodically so the databases are replicated onto that box; if a live one fell over, we could very quickly bring the hot spare in to take over.
Can anyone offer any perspectives on the best method of attack for this?
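For reference, the periodic update I have in mind is essentially a poor man's log shipping. A minimal sketch of the idea (database and share names are made up):

-- on a live server, on a schedule:
BACKUP LOG SalesDB TO DISK = '\\hotspare\logship\SalesDB.trn'
-- on the hot spare, restore each backup but stay ready to accept more:
RESTORE LOG SalesDB FROM DISK = '\\hotspare\logship\SalesDB.trn' WITH NORECOVERY
-- at failover time, bring the standby copy online:
RESTORE DATABASE SalesDB WITH RECOVERY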
My company has valuable data being constantly logged to a database 24/7. Actually, the data logs to two separate database systems in case one goes down for whatever reason: power failure, routine maintenance, a glitch. These two systems are housed at different physical locations.
We are planning to set up a third database system which polls the other two every minute. It compiles and maintains a deduped, clean, and complete collection of data from the other two database systems. If one of the first two systems goes down or misses a row, the third will automatically get the data from the other system. If this deduped database system goes down, we can rebuild its data from the other two.
Does this seem like the right route to take or is there a simpler or safer route available?
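To make the polling step concrete, here is roughly what I picture the third system running every minute (all names are hypothetical, and I'm assuming the two sources are reachable as databases on the same instance or via linked servers):

-- pull anything system A has that the consolidated store doesn't
INSERT INTO Consolidated.dbo.LogData (LogID, LoggedAt, Payload)
SELECT a.LogID, a.LoggedAt, a.Payload
FROM SystemA.dbo.LogData a
WHERE NOT EXISTS (SELECT 1 FROM Consolidated.dbo.LogData c
                  WHERE c.LogID = a.LogID)
-- then repeat the same statement against system B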
Hey guys, we have merge replication set up between two MS SQL servers, and I'm wondering if it's possible to allow two connection strings in our application, so that if SQL Server 1 doesn't respond, the other one will take over. We currently have this in our global.asax:

Public Shared dbConnString as String = "Server=xxxx;Initial Catalog=xxxx;User Id=xxxx;Password=xxxx;"

So is it possible to change this to look up the first server and then connect to a second server if needed? Thank you for any advice at all.
I have a table on which I want to precalculate the length of a character field, then group and sum it up. I thought I could do this by creating a view with a GROUP BY clause that includes the SUM function. Unfortunately, the compiler complains with:

A clustered index cannot be created on the view 'MyView' because the index key includes columns which are not in the GROUP BY clause.

Wish I could verbalize the problem a little better, but the following pared-down example should serve as a demonstration:

SET ANSI_WARNINGS ON
SET ANSI_PADDING ON
SET ANSI_NULLS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF
GO
CREATE TABLE myTable
(
    myID INT NOT NULL,
    RecNum INT NOT NULL,
    TestString VARCHAR(80) NOT NULL
)
GO
INSERT INTO myTable VALUES(1, 1, 'a')
INSERT INTO myTable VALUES(1, 2, 'ab')
INSERT INTO myTable VALUES(2, 2, 'abc')
GO
CREATE VIEW dbo.MyView WITH SCHEMABINDING AS
SELECT
    myID = myID,
    slen = SUM(LEN(TestString)),
    recn = COUNT_BIG(*)
FROM dbo.myTable
GROUP BY myID
GO
CREATE UNIQUE CLUSTERED INDEX IX_MyView ON MyView(myID, slen)
-- A clustered index cannot be created on the view 'MyView' because
-- the index key includes columns which are not in the GROUP BY clause.
GO
DROP VIEW MyView
GO
DROP TABLE myTable
GO

Thanks,
Chris Rathman
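Follow-up: rereading the error, I gather that the key of a unique clustered index on a GROUP BY view may reference only the columns named in the GROUP BY clause; since myID is unique per group anyway, keying on it alone should be accepted:

CREATE UNIQUE CLUSTERED INDEX IX_MyView ON dbo.MyView(myID)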
OK, at this point I have the reader reading the table's data in a loop while it's not empty. While gathering each row of data, I was wondering if it's possible to move to the next row once I've reached a certain column. The main users table has just the one user, but its relationship table has a couple of family members. I was hoping someone could show me how to make it so that the one user and all his related family members will print out to a label.

while (reader.Read())
{
    // trim the stored values before comparing
    string usr = reader["UserName"].ToString().TrimEnd();
    string pss = reader["Password"].ToString().TrimEnd();

    if (usrNmeLbl.Text == usr)
    {
        if (psswrdLbl.Text == pss)
        {
            // read the column from the reader and cast it to String as some may contain null values
            usrNmeLbl.Text = reader["FirstName"].ToString() + " ";
            psswrdLbl.Text = reader["LastName"].ToString() + "<br />";
            psswrdLbl.Text += "Place of Birth: " + reader["BirthPlace"].ToString() + "<br />";
            psswrdLbl.Text += "<img src=" + reader["Photo"].ToString() + " />" + "<br />";
            Label4.Text = "Your relatives: " + "<br />";
            Label4.Text += reader["Relation"].ToString() + ": ";
            Label4.Text += reader["RelativeFN"].ToString() + " " + reader["RelativeLN"].ToString();
            Label4.Text += reader["Relation"].ToString() + ": ";
            Label4.Text += reader["RelativeFN"].ToString() + " " + reader["RelativeLN"].ToString();
        }
    }
}

If I grab the Relation table data again, it doesn't cycle on to the next relative. I was hoping that it would, but it doesn't, so I'm wondering if there is something that could be added to the second set:

Label4.Text += reader["Relation"].ToString() + ": ";
Label4.Text += reader["RelativeFN"].ToString() + " " + reader["RelativeLN"].ToString();
Today a vendor bluntly stated that VMware provides the same failover and redundancy for SQL Server, which would render AlwaysOn high availability unnecessary.
Essentially, VMware would detect a problem, fail over, and deliver 99.99% uptime.
Hi, this is my first post and I'm relatively new to SSIS so please go easy on me.
Without going into too much detail about it, I've set up a simple SSIS package which does this in a nutshell:
Foreach loop picks up all *.xls files in a given folder:
1 - Puts the name of the current spreadsheet into a variable
2 - File System Task copies the current spreadsheet ("abc.xls") to a file called "work.xls"
3 - Data Flow Task performs data extraction on "work.xls" and puts it into a SQL Server database
4 - File System Task moves "abc.xls" into a "success" folder
Continues with the loop - moves on to the next spreadsheet
This works fine, so long as the spreadsheets all have the same number of columns.
As soon as one of them has a column missing (believe me, this will happen - we're dealing with users here) the package falls over at step 3.
When the package comes across an erroneous spreadsheet, what I'd like to do is move the offending file to a failure folder (making step 4 either a success or failure file move) and carry on with the next one.
I know that you can have an error path (the red line) from any step within the dataflow task, but this doesn't help me because the error lies in the structure of the spreadsheet and not the contents.
I've already come up with a work around whereby each file is moved into the failures folder just after step 2, then moved from the failures folder into the success folder at step 4.
This almost gives me what I want, although of course the package still falls over whenever it encounters a dodgy looking spreadsheet.
Is there any way that I can get the package to do what I'm after?
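One idea I've been wondering about (untested, and the sheet and column names here are guesses): probe the workbook's structure with an Execute SQL Task before step 3, using an ad hoc Jet query that names the columns the data flow expects. If a column is missing, the query fails, and that failure could be routed to a "move to failures folder" step:

SELECT TOP 1 [CustomerID], [OrderDate]   -- the columns the data flow expects
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
    'Excel 8.0;Database=C:\Import\work.xls;HDR=YES',
    'SELECT * FROM [Sheet1$]')

(This assumes Ad Hoc Distributed Queries is enabled on the server.)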
We are using Office 2010 on Citrix with the RCO Master Data Excel add-in. I am getting many calls from users saying the add-in has disappeared from the Excel menu. I can re-enable it by going to File -> Options -> Add-ins and managing the COM add-ins. I have not been able to work out why it becomes disabled. Is there any registry setting I can use to stop it from being disabled?
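In case it helps anyone answer: from what I've read, the COM add-in load state lives in the registry. The LoadBehavior value (DWORD, 3 = load at startup; Office flips it to 2 when it soft-disables an add-in) sits under

HKEY_CURRENT_USER\Software\Microsoft\Office\Excel\Addins\<add-in ProgID>

(I don't know the RCO add-in's actual ProgID), and hard-disabled items are recorded under

HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Excel\Resiliency\DisabledItems

Resetting LoadBehavior to 3 and clearing DisabledItems at logon might work as a stopgap, though I'd treat this as a sketch rather than a confirmed fix.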
We have a large 'History' database that is currently about 4.5TB, with most of that in a datafile that is 4.2TB. We wanted to stop growth on the one large data file and have SQL Server allocate new data to the other data files, but this throws an error when we attempt to change the MAXSIZE settings:
ALTER failed for Database 'History' MODIFY FILE failed. Specified size is less than or equal to current size.
SQL Server is telling us the maximum size we can specify is 2TB, and anything over that is rejected. Since the change is blocked, the file just continues to grow.
Is there any way to cap the growth of the 4.2TB file and not allow any more data to be written to it?
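One option we're considering, which I believe sidesteps the MAXSIZE error: turn off autogrow on that file instead of lowering its maximum size. A sketch, with a hypothetical logical file name (sys.database_files has the real one):

ALTER DATABASE History
MODIFY FILE (NAME = History_Data1, FILEGROWTH = 0)   -- the file keeps its size but never grows

As I understand it, SQL Server can still fill free space already inside the file (proportional fill); only growth past the current size stops.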
Hi all. I am using a SQL Server database, and one of my tables has a column set to Identity = Yes, i.e. the ID is incremented by one on every insert. If an insertion fails, the generated ID is skipped and the next insertion uses a new ID. For example: the first insertion gets ID = 1; if the second insertion errors while adding data, ID 2 is never used, and when I correct the error and insert again the row gets ID = 3. Can anyone give me a solution for this? And next: when I delete the data from the table, say the IDs go up to 20 and I delete all the records, after inserting a new record the ID will be 21. Please help me with this.
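(From what I've read, the gaps are by design: identity values are handed out even when the insert later fails, and they are not reclaimed. If the counter really has to be reset, something like this reseeds it - the table name is hypothetical:

DBCC CHECKIDENT ('dbo.MyTable', RESEED, 20)   -- the next insert will get 21
TRUNCATE TABLE dbo.MyTable                    -- alternatively, emptying the table this way resets the seed on its own

though depending on gap-free identity values is usually discouraged.)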
I have snapshot replication running and now I want to stop the replication for a while. Is it possible to do that? If so, where do I set it to stop? Please help.
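(One thing I'm considering, hedged: snapshot replication only moves data when its agents run, so disabling the Snapshot Agent's SQL Agent job should pause it - the job name below is made up:

EXEC msdb.dbo.sp_update_job @job_name = N'MyPublication-Snapshot', @enabled = 0

and setting @enabled = 1 again would resume it.)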
I use EM to handle two SQL Servers. One I can stop; the other I can't (though I think I used to be able to).
When I select 'Stop' I get the following message from EM:
"An error 1051 - (A stop control has been sent to a service which other running services are dependent on) occurred while performing the service operation on the MSSQLServer service."
How do I track down what this other running service is? How do I stop SQL?
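(From what I've dug up so far, the dependent service is usually SQLServerAgent; I think these commands would confirm that and work around it, though this is from memory:

sc enumdepend MSSQLServer    (lists the services that depend on MSSQLServer)
net stop SQLServerAgent      (stop the usual dependent service first)
net stop MSSQLServer         (then SQL Server itself should stop)

)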
I wanted to remove my Northwind database, but that database is currently used for replication, so I'll have to stop the replication before I can remove it.
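(For anyone with the same chore, I believe this strips all replication objects out of a database so it can be dropped - a hedged sketch:

EXEC sp_removedbreplication 'Northwind'

after which DROP DATABASE Northwind should go through.)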
Hi, can anybody tell me how to stop the execution of a T-SQL statement immediately? I have tried Alt+Break, but it's taking a long time to stop. What's the reason? Please suggest. I am dealing with a database containing 24,343,000 rows. Joydeep
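(Reading around, I think the reason is that cancelling triggers a rollback of everything the statement has changed so far, which can take as long as the original work did. From another connection the rollback can be watched - 52 here is a placeholder spid:

KILL 52 WITH STATUSONLY   -- only reports estimated rollback completion for a spid already rolling back

)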
I have a statement that has been running great for the past hour, but now it will not pull the info any longer and just gives me a NULL:
DECLARE @Text VARCHAR(2000)
SELECT @Text = COALESCE(@Text + '', '') + x.memotext
FROM (SELECT TOP 100 PERCENT memotext
      FROM customermemoheader
      WHERE memonumber = 'TERMS' AND customernumber = '0009'
      ORDER BY seqnumber) AS x
SELECT @Text AS MemoText
I have verified in the tables that the info is there by running the SELECT statement from within the parentheses. It has worked for 8,000 records and now it no longer works. Any help would be much appreciated.
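One thing I plan to try, on the theory that string + NULL yields NULL under the default settings, so a single NULL memotext would wipe out the whole running value - guarding both sides:

DECLARE @Text VARCHAR(2000)
SELECT @Text = COALESCE(@Text, '') + ISNULL(x.memotext, '')   -- NULL rows no longer nullify the result
FROM (SELECT TOP 100 PERCENT memotext
      FROM customermemoheader
      WHERE memonumber = 'TERMS' AND customernumber = '0009'
      ORDER BY seqnumber) AS x
SELECT @Text AS MemoText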
I have a performance issue with a Cognos report against SQL Server 2005. The total running time of the report is 1 minute 15 seconds, and using SQL Profiler I found out that 1:13 of that is spent preparing SQL. Execution and generating the report take 2 seconds; no problem there. The SQL is the same every time I run the report, yet SQL Server spends 1:13 preparing it every single time! I'm no DBA, but as far as I understand that's not what's supposed to happen; once prepared, the SQL should execute quickly every time.
Is there a way to stop SQL from preparing the statement every time?
(Cognos 8 against SQL Server 2005 through OLEDB. Oh, and this query takes about a second when run in EM.)
Hi, I am doing some resource-hungry tasks (some extraction and loading through DTS), and each time, the SQL Server log files get filled up! Is there any way to stop the logging (like during a restore)? Thanks in advance. -surajit
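(You can't turn logging off completely, as far as I know, but switching the database to a minimally logged mode for the duration of the load should shrink the problem - the database name is hypothetical:

ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED   -- minimal logging for bulk operations
-- ... run the DTS extraction/load ...
ALTER DATABASE MyDB SET RECOVERY FULL          -- switch back, then take a fresh backup

)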
I'm desperate! I have an application on my SQL Express instance with only one database for it. But right now that database is at 3.97GB, not counting the log file. I'm already buying SQL Enterprise to upgrade, but that will take some days. My question is: when the size reaches 4.00GB, will my application stop working? Does my business stop?
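(While I wait for the upgrade I'm watching the size like this; my understanding is that at the 4GB data cap inserts start failing with error 1105, but the database stays readable:

EXEC sp_spaceused   -- run in the application database; 'reserved' shows the data size counted against the cap

)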
Error: 0xC0047039 at Load work$$MyDataFile from flat file, DTS.Pipeline: SSIS Error Code DTS_E_THREADCANCELLED. Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown. There may be error messages posted before this with more information on why the thread was cancelled.
Error: 0xC0047021 at Load work$$MyDataFile from flat file, DTS.Pipeline: SSIS Error Code DTS_E_THREADFAILED. Thread "WorkThread0" has exited with error code 0xC0047039. There may be error messages posted before this with more information on why the thread has exited.
I did not request a shutdown! Someone please give me a global switch to tell SSIS to keep going. Why should I fail to load 20,000 records because of one bad record?
I have tediously changed every field on every tool to "Ignore Error". Why should I still have to do this? I mean why does SSIS default to failure? Is there some switch I could change in the XML that would change this default behaviour?
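(The knob I keep reading about is MaximumErrorCount, which also shows up in the raw package XML; as a hedged sketch of what I mean, the .dtsx contains something like:

<DTS:Property DTS:Name="MaxErrorCount">1</DTS:Property>

and raising that value - on the package and on the Data Flow task - is supposed to let execution continue past errors.)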
The problem with the data in this case: a NewLine character was missing, so the data flow saw it as one very long line.