What Solutions Make Returning From A Disaster Easier?

Apr 18, 2007

I'm trying to figure out what solution (replication, mirroring, clustering) would work best for me.

I have been reading many articles in BOL and in this forum. Most talk about getting data TO a backup/standby/subscriber, but I can't find a lot of info regarding getting the data BACK after a disaster is over.

We have a main office and a disaster recovery facility. Most of the time there are no data updates at the disaster location. So, I need to get data to the disaster facility via WAN (latency is not a huge issue - end of day syncing is fine) for backup purposes. In the event of a disaster, the main office will be offline and data changes will happen at the disaster site. When the disaster is "over" and we return to the main office, what's the best scheme to reverse the data back to the main office to start business again? We are a financial company, and have gigabytes of relatively static data. Most changes are current day. So, to snapshot a 100GB database when I know only a few hundred MB changes a day doesn't seem feasible to me.

Most replication scenarios (at least from what I see) can't easily "reverse" the replication after a disaster situation. I'm looking at merge replication on a schedule, which looks promising, but I was wondering if anyone else has any ideas or suggestions?
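
For the nightly sync, the kind of thing I'm picturing is shipping a differential rather than a full snapshot (just a sketch; the database name and paths are made up):

-- nightly, at the main office (assumes a full backup has already been taken once)
BACKUP DATABASE MainDb TO DISK = '\\drsite\ship\MainDb_diff.bak'
    WITH DIFFERENTIAL, INIT;

-- at the disaster recovery facility (assumes the standby copy was seeded from that
-- full backup restored WITH NORECOVERY, so it can keep accepting differentials)
RESTORE DATABASE MainDb FROM DISK = 'D:\ship\MainDb_diff.bak'
    WITH STANDBY = 'D:\ship\MainDb_undo.dat';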

View 5 Replies



There Must Be An Easier/better Way...

Oct 26, 2005

I have a database with tables representing city blocks, houses, and people. On my main aspx page, I have a datagrid which displays a list of the blocks, followed by a count of the houses in each block, followed by a count of the people in each block. Right now I have something that works, but it is awful. Every time the page loads it makes a ton of connections to the database, and I have convoluted spaghetti code. Now I need to add more functionality, but I can't rightly do that until I find a more efficient way to do this.

Step one: The program connects to the database and gets a list of the blocks using the statement "SELECT blockid FROM blocks".
Step two: The program iterates through each blockid in the list and executes the statement "SELECT houseid FROM houses WHERE blockid = (whatever)".
Step three: The program counts the rows returned from step two to determine how many houses are in that block.
Step four: The program iterates through each houseid from step two and executes the statement "SELECT COUNT (personid) FROM people WHERE houseid = (whatever)"; the result is added to a variable that keeps a running count.
Step five: The final value of the variable in step four is the number of people in that block.

My question for you is: how can this be done more efficiently? Can I group together some awesome SQL statement that will get these counts? I thought about doing something like "SELECT blockid (SELECT COUNT houseid FROM houses WHERE blockid = something) as HouseCount" but I can't figure out how I could take the value in the first column (blockid) and pass it to the inner select statement.

Any thoughts on how to make this better? Below is the full code for my function, in case you want to examine it in more detail. Also, I am in the process of changing the select statements into stored procedures, so don't beat me up too badly over that bit of ugliness in my function. Thanks.

Private Function GetBlockDataSet() As DataSet
    Dim myconnection As SqlConnection
    Dim objDataAdapter As SqlDataAdapter
    Dim query, connectionstring As String
    Dim tempDS As New DataSet
    Dim houseDS As New DataSet
    Dim peopleDS As New DataSet
    Dim DC1 As New DataColumn
    Dim DC2 As New DataColumn
    Dim i, j, peoplecount As Int32
    Dim DR, DR2 As DataRow

    ' Get the list of blocks
    query = "SELECT blockid FROM blocks"
    connectionstring = ConfigurationSettings.AppSettings("ConnectionString")

    myconnection = New SqlConnection(connectionstring)
    objDataAdapter = New SqlDataAdapter(query, myconnection)
    objDataAdapter.Fill(tempDS, "BlockList")

    ' Add count columns to the block list
    DC1.DataType = System.Type.GetType("System.Int32")
    DC2.DataType = System.Type.GetType("System.Int32")
    DC1.ColumnName = "HouseCount"
    DC2.ColumnName = "PeopleCount"
    tempDS.Tables("BlockList").Columns.Add(DC1)
    tempDS.Tables("BlockList").Columns.Add(DC2)

    i = 0
    For Each DR In tempDS.Tables("BlockList").Rows
        ' Get the houses for this block
        query = "SELECT houseid FROM Houses WHERE blockid = '"
        query &= tempDS.Tables("BlockList").Rows(i).Item(0)
        query &= "'"
        objDataAdapter = New SqlDataAdapter(query, myconnection)
        objDataAdapter.Fill(houseDS)

        tempDS.Tables("BlockList").Rows(i).Item(1) = _
            houseDS.Tables(0).Rows.Count
        tempDS.Tables("BlockList").Rows(i).Item(2) = 0
        j = 0
        peoplecount = 0
        For Each DR2 In houseDS.Tables(0).Rows
            ' Count the people in each house and keep a running total
            query = "SELECT COUNT (personid) FROM people WHERE HouseID = '"
            query &= houseDS.Tables(0).Rows(j).Item(0)
            query &= "'"
            objDataAdapter = New SqlDataAdapter(query, myconnection)
            objDataAdapter.Fill(peopleDS)
            peoplecount += peopleDS.Tables(0).Rows(0).Item(0)
            j = j + 1
            peopleDS.Clear()
        Next
        tempDS.Tables("BlockList").Rows(i).Item(2) = peoplecount
        houseDS.Clear()

        i = i + 1
    Next

    GetBlockDataSet = tempDS

    ' Here comes the garbage collection
    myconnection.Close()
    myconnection.Dispose()
    myconnection = Nothing
    objDataAdapter.Dispose()
    objDataAdapter = Nothing
    tempDS.Dispose()
    tempDS = Nothing
    houseDS.Dispose()
    houseDS = Nothing
    peopleDS.Dispose()
    peopleDS = Nothing
    DC1.Dispose()
    DC1 = Nothing
    DC2.Dispose()
    DC2 = Nothing

End Function
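
For reference, this is roughly the kind of single query I am aiming for (a sketch only, assuming tables blocks(blockid), houses(houseid, blockid) and people(personid, houseid) as described above):

SELECT b.blockid,
       COUNT(DISTINCT h.houseid) AS HouseCount,
       COUNT(p.personid)         AS PeopleCount
FROM blocks b
LEFT JOIN houses h ON h.blockid = b.blockid
LEFT JOIN people p ON p.houseid = h.houseid
GROUP BY b.blockid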

View 5 Replies View Related

Can I Do This An Easier Way?

Apr 20, 2007

This statement returns the date formatted as 'yyyymmddhhnn', but there has to be an easier way. Can someone help?

DECLARE @dt datetime;
SELECT @dt = GETDATE();
SELECT CONVERT(varchar(40),@dt,112) +
RIGHT('0' + CAST(DATEPART("hh", @dt) AS varchar(2)), 2) +
RIGHT('0' + CAST(DATEPART("mi", @dt) AS varchar(2)), 2) AS isodt;

Thank you.

View 4 Replies View Related

Is There Any Easier Way Than Using This Cursor?

Feb 20, 2008



Hello,

I just finished a lengthy process (for me) of writing this cursor that gets each row from the Fact table, and creates a running total of the billable hours (hours from rows where BillableType = 1). It starts over when the month changes, the year changes, or the employee changes.

I did this again for a running total of billable hours for the year.

I put these in 2 stored procedures and run them in an Execute SQL control flow task in my SSIS package. It seems like SSIS is designed to make this kind of procedure simpler, and I'm wondering if I'm doing this in the best way, or if there's a more efficient way to do this using tasks inside SSIS. Can anyone advise? Any help is greatly appreciated.

Best,
Andy




Don't feel obligated to read this cursor if you understand the problem above. It's not color coded because when I copied it from SQL I lost the tabbing on the case expressions.





Code Snippet

CREATE PROCEDURE YearTotalHours AS

DECLARE @FactBillingId Int;
DECLARE @dt DateTime;
DECLARE @EmpKey Int;
DECLARE @YearTotalHours Float;
DECLARE @NewYearTotalHours Float;
DECLARE @NewYear Int;
DECLARE @BillableType Int;


DECLARE FactBillingFinalRows CURSOR FOR
SELECT FactBillingId
FROM FactBillingFinal

OPEN FactBillingFinalRows

FETCH NEXT FROM FactBillingFinalRows
INTO @FactBillingId

WHILE @@FETCH_STATUS = 0
BEGIN




SET @EmpKey = (SELECT EmployeeKey FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
SET @dt = (SELECT dt FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
SET @BillableType = (SELECT BillableTypeKey FROM FactBillingFinal WHERE FactbillingId = @FactBillingId)
SET @NewYear = (SELECT Year(@dt))
SET @NewYearTotalHours =
    CASE
        WHEN @FactBillingId = 1 THEN
            CASE
                WHEN @BillableType = 1 THEN (SELECT Hours FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
                ELSE 0
            END
        ELSE
            CASE
                WHEN @EmpKey <> (SELECT EmployeeKey FROM FactBillingFinal WHERE FactBillingId = @FactBillingId - 1) THEN
                    CASE
                        WHEN @BillableType = 1 THEN (SELECT Hours FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
                        ELSE 0
                    END
                ELSE
                    CASE
                        WHEN YEAR(@dt) = (SELECT YEAR(dt) FROM FactBillingFinal WHERE FactBillingId = @FactbillingId - 1) THEN
                            CASE
                                WHEN @BillableType = 1 THEN (SELECT @YearTotalHours + (SELECT Hours FROM FactBillingFinal WHERE FactBillingId = @FactBillingId))
                                ELSE
                                    CASE
                                        WHEN @NewYearTotalHours IS NULL THEN 0
                                        ELSE @NewYearTotalHours
                                    END
                            END
                        ELSE
                            CASE
                                WHEN @BillableType = 1 THEN (SELECT Hours FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
                                ELSE 0
                            END
                    END
            END
    END



UPDATE FactBillingFinal SET YearTotalHours = @NewYearTotalHours WHERE FactBillingId = @FactBillingId

SET @YearTotalHours = @NewYearTotalHours;

FETCH NEXT FROM FactBillingFinalRows
INTO @FactBillingId
END

CLOSE FactBillingFinalRows
DEALLOCATE FactBillingFinalRows
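
For comparison, here is a set-based sketch I'm considering instead of the cursor (it assumes FactBillingId increases in date order within each employee, which the cursor above also relies on):

UPDATE f
SET YearTotalHours =
    (SELECT SUM(CASE WHEN f2.BillableTypeKey = 1 THEN f2.Hours ELSE 0 END)
     FROM FactBillingFinal f2
     WHERE f2.EmployeeKey = f.EmployeeKey
       AND YEAR(f2.dt) = YEAR(f.dt)
       AND f2.FactBillingId <= f.FactBillingId)
FROM FactBillingFinal f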




View 9 Replies View Related

Please Tell Me An Easier Way To Solve This Problem

Jan 7, 2008

Hi,

I have a table called load which has a load value for each hour, i.e. load_1, load_2 ... load_24. I want to find the max value among the 24 hourly loads and assign it to a variable, say load_max.

Format of table:
load_ID load_1 load_2 load_3 load_4 load_5 load_6 ... load_24
1 2 4 5 6 7 8 23 56 44 22 64 33 67 24 345 34 75 57 24 23 24 24 66 789

These are the 24 load values with the load_ID. I have lots of rows, with load_ID running from 1 to 100. The output should display the load_ID, load_max and load_min for each row (after comparing the 24 loads with each other). How can I do it with SQL Server?
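
One approach I'm looking at is UNPIVOT (a sketch only; it assumes SQL Server 2005 or later and that the 24 load columns share a numeric type; the table name is bracketed because LOAD is a reserved word):

SELECT load_ID,
       MAX(load_value) AS load_max,
       MIN(load_value) AS load_min
FROM [load]
UNPIVOT (load_value FOR load_hour IN
    (load_1, load_2, load_3, load_4, load_5, load_6, load_7, load_8,
     load_9, load_10, load_11, load_12, load_13, load_14, load_15, load_16,
     load_17, load_18, load_19, load_20, load_21, load_22, load_23, load_24)) AS u
GROUP BY load_ID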

View 7 Replies View Related

Making Things Easier

Jan 14, 2008

Good afternoon,

I have a shell plugin here; it compiles fine and can be viewed in BI Dev Studio when choosing the DM technique using the proper wizard.

I also have here a K-Means implementation that estimates the number of clusters using a statistical semi-empiric index (the PBM index).

This implementation is done in C# and works fine. But it has to receive all the data of the database (all variables for each row) in order to do the proper vectorial calculations in a CSR (Compressed Sparse Row) fashion.

Besides, as you know, K-Means needs all the data at once because of the cluster mean (centroid) calculation.

So, I have some questions:

1) Where to place the call to the K-Means implementation in the shell passing as argument an object holding all the data ?


2) After this call, with the data clustered, what other objects must be modified in order to use Microsoft Cluster Viewer ?


3) I will need to create a new column or a new table on the database to specify which data belongs to which cluster. Can I open an ADO connection as I normally do in other programs from inside the plugin or is there another (easier/better) way to do so ?

Thanks a lot once more.

Best regards,

-Renan Souza

View 1 Replies View Related

Please Help With Oltp Solutions

Aug 28, 2007

Hi. I don't understand what they mean when they say "developing OLTP solutions". Can anybody please explain it to me? Also, does anyone know what ways there are to develop SQL OLTP solutions using SQL 2005 Reporting Services, OLTO, Excel Services, as well as any good tutorials for it?

thanks for the help.

View 6 Replies View Related

Replication Solutions?

Jul 20, 2005

Hello everybody,

I work at a company in Iceland and we have developed a 3-tier solution written in ASP - Visual Basic - MSSQL2000. Four companies are using the solution almost constantly and accessing it through a browser. The connection has never gone down (yet) in a way that has affected our clients, but we are thinking about how to run the solution locally at every site and then replicate to a main server that is hosted at our place.

My question is: does it affect speed for the clients that are using the solution, or is there a better way of doing this?

The solution is a ticket sale system. Our clients use it every day, and people sitting at home should be able to order tickets online. Because of that we can't update the database every 5 or 15 minutes, since we don't want a double booking of the same seat.

Any help appreciated!

- Sindri

View 1 Replies View Related

Solutions And Projects: What Are They And How Do They Help The DBA

Mar 10, 2006

I guess the Subject line sums it all up, but I need some experienced explanation of what do a solution and a project represent, and how do I use them to my advantage.

Is a solution an entire database? If so, how can I create a solution from my existing databases?

Are either of them a way to collect together scripts etc which will be run against a production database when the solution is rolled out?

What is a project? Is it a single set of scripts related to an upgrade to a database? If so, can it be executed as a single entity? How is the sequence of execution controlled?

And so on and so on....

2005 is such a step backwards for DBAs, with all the features we used to have and now don't. If it weren't for the fact that MS will eventually stop supporting 2000, I frankly see no incentive to upgrade, myself.

Ok, OK, flame off.

Can anybody suggest some resources which might give me some insight into these questions?

Thanks,

-Rob

View 3 Replies View Related

Easier Email Alerts From Sql 7/2000?

Oct 5, 2006

hi,

I get just as frustrated each time I try to configure email alerts for failed jobs on MS SQL; it is beyond me why Microsoft couldn't just let you point at an SMTP server to send through and be done with it.

Is there a way to avoid having to set up an email client on our SQL 7 and 2000 servers, through some 3rd party app or other simple solution?
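
The kind of workaround I'm picturing is sending straight to an SMTP host with CDO.Message through the sp_OA* OLE Automation procedures (just a rough sketch; the server name and addresses are made up, and the OLE Automation procedures have to be available to the caller):

DECLARE @msg int;
EXEC sp_OACreate 'CDO.Message', @msg OUT;
-- 2 = send over the network using the SMTP server named below
EXEC sp_OASetProperty @msg, 'Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/sendusing").Value', 2;
EXEC sp_OASetProperty @msg, 'Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").Value', 'smtp.example.com';
EXEC sp_OAMethod @msg, 'Configuration.Fields.Update', NULL;
EXEC sp_OASetProperty @msg, 'To', 'dba@example.com';
EXEC sp_OASetProperty @msg, 'From', 'sqlserver@example.com';
EXEC sp_OASetProperty @msg, 'Subject', 'Job failed';
EXEC sp_OASetProperty @msg, 'TextBody', 'A scheduled job failed.';
EXEC sp_OAMethod @msg, 'Send', NULL;
EXEC sp_OADestroy @msg;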

thanks in advance,
daniel

View 2 Replies View Related


Replication--Too Many Questions---Too Little Solutions

Dec 15, 1998

1) Do I have to install publishing on both servers (A and B) even
though one will be publisher and the other will be subscriber.


2) a. Can named pipes be used for communication between these two servers, which are on the same domain but not on the same network? Why or why not, whatever the answer may be?
b. If I use TCP/IP, is the connection set up using the client configuration utility? How is the connection string set up in this case?
c. Suppose the publishing server was not using NetBEUI. Could this pose any problems for communication? (Is using lmhosts sufficient in this scenario?)


3) I have set up a (remote) SQL Server to be a Publisher/ Distributor.
Both SQL servers have been configured to be remote servers relative to
each other. Following are the steps I have carried out to set up
replication:

On the Publication Server (a remote server), I went to Server --> Replication Configuration --> Install Publishing.

Next, I chose a local distribution server. I think that the instdis.sql script ran fine because the distribution database was installed successfully.

Next, I went into Manage Publications from the Server menu to set up the publications.

When I went to the subscription server to subscribe to the published articles, I got the following error message:


Error 14093: [SQL Server] You must be System Administrator (SA)
or Database Owner (dbo) or Replication Subscriber (repl_subscriber)
to execute the stored procedure.




PS
Please Help

View 1 Replies View Related

Bidirectional Replication Solutions

Apr 9, 2008

According to this article from last year: http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1251149,00.html

These are the main options:

* Merge Replication
* Bi-directional Transactional replication
* Immediate Updating
* Queued Updating
* Peer to Peer
* RDA

Are there any new alternatives that have popped up over the last year? Are all six of the above still good options based on needs?

We currently have a three server topology using merge replication:

ServerA (App1DB) <--> ServerB (App2 DB)
ServerA (App1DB) <--> ServerB (App3 DB)
ServerA (App1DB) <--> ServerC

ServerA supports 1 intranet application using 1 DB.
ServerB supports 2 extranet applications using 2 DBs (1 per application).
ServerC is our DW server, on which we have installed a Search DB that is used by all applications.

Prior to our "upgrade" to merge replication we were using 1-way transactional replication, so our topology looked like:

ServerA --> ServerB (App2 DB)
ServerA --> ServerB (App3 DB)

We also had linked servers between ServerB and ServerA as well as between ServerC and ServerA to update data on ServerA. We would simultaneously update/insert the tables on ServerB/C and create custom stored procedures to handle the data already processed from the subscribers.

With our new implementation we are seeing more latency as well as locking, since merge replication is not running off of transaction logs anymore.

My main question is: would we see an increase in performance and less locking as a result of a topology like this:

Master <--> ServerA (App1 DB)
Master <--> ServerB (App2 DB)
Master <--> ServerB (App3 DB)
Master <--> ServerC

Where Master is a server and DB supporting no applications (hence no OLTP). Would latency be the same/better/worse? Should we stick with our current implementation and just performance tune it?

A secondary question I have is, given the bidirectional replication options above, did we choose the best one for us? These servers are all on the same network hosted by the same provider over Gigabit Ethernet (I assume). I think we have the polling interval set at 5 seconds and we are thinking of moving it to 10 seconds at most. Real-time latency is not critical to our business but it would be a "nice to have". For conflict resolution we are keeping it simple: whichever was inserted/updated last "wins". It looks like bi-directional transactional replication might be a better option for us. Would it give us the autonomy we are looking for? Any major "cons" to using bi-directional transactional replication over merge replication (besides scalability)? Scalability may come into play a few years down the road but for now it is not a high priority. Also, would the Master model described above using bi-directional transactional replication be a successful implementation?

ETA - One thing merge replication gives us is autonomy between our application servers, particularly when ServerA needs to come down for upgrades; the applications on ServerB can still function without any dependencies, unlike before with 1-way transactional replication and linked server calls.

View 2 Replies View Related

Meta Integration Solutions

Apr 27, 2007

Hi,



Has anyone experience with Meta Integration Solutions and converting to SSIS? And converting to SQL? And converting to Analysis Server? And converting to Reporting Server?

You can find their website at http://www.metaintegration.net/Products/Overview/Solutions.html



Constantijn Enders



View 1 Replies View Related

Easier Method To Remove And Add A Table To Database?

Feb 22, 2000

I have a table that is corrupted and want to remove it and add a backup version of it. How can I remove this table and add it again, preserving all the foreign key constraints, permissions, dependencies, etc.? Simply exporting and importing does not work. I could painfully remove the table and then painfully reconnect it again, recreating all the foreign key constraints, etc., by hand; but there has to be an easier way! What is the how-to?

Thank you!

Llyal

View 1 Replies View Related

Easier Way Of Building Pivot Tables In MS SQL Server

Jul 10, 2007

Dear All,

I am very new to MS SQL Server and I am wondering if there is some tool which would allow me to build pivot tables in SQL more easily. At the moment writing such a query can be quite challenging and difficult.

Is there any software which allows you to do it more intuitively and gives you some visual feedback about the query you are building?

I would be very grateful for any help with this.

wujtehacjusz

View 2 Replies View Related

Subquery, Select Case, Pivot Help. Is There An Easier Way?

Apr 12, 2008



I'm trying to select from a table with three columns. I want these columns to be spread out among multiple columns based on the values. I hope someone can shed some light on this. I might be able to use pivot, but don't know how the syntax would roll for this.

Here is the example of dummy values and the output I am trying to obtain.




drop table table1

create table table1
(Category int, Place int, Value int)

insert into table1 values (1, 1, 20)
insert into table1 values (1, 2, 12)
insert into table1 values (1, 3, 30)
insert into table1 values (2, 1, 34)
insert into table1 values (2, 2, 15)
insert into table1 values (2, 3, 78)

select Category,
    (select top 1 value from table1 where place = 1 and Category = t1.Category) as place1,
    (select top 1 value from table1 where place = 2 and Category = t1.Category) as place2,
    (select top 1 value from table1 where place = 3 and Category = t1.Category) as place3
from Table1 t1
group by Category
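
If PIVOT is available (SQL Server 2005 and up), here is a sketch of the same result without the correlated subqueries:

select Category, [1] as place1, [2] as place2, [3] as place3
from (select Category, Place, Value from table1) as src
pivot (max(Value) for Place in ([1], [2], [3])) as p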




Thanks for the help.

View 5 Replies View Related

There Must Be An Easier Way - Dupe Check And Increment Field

Nov 11, 2006

Greetings all,

I have a bit of brainteaser that's going to take some serious thought.

I'm importing information from .xls files into a SQL table. The problem is that I need to check for dupes, increment certain fields when a dupe is found, and then not insert (or else delete) the duplicate rows.

For example, if I have Adam, Turner, 32, 50 already in the table and someone tries to insert Adam, Turner, 32, 50...I need it to increment to read Adam, Turner, 64, 100 and not insert the record. (Notice 2 fields were incremented.)

With that, I have created an INSERT trigger as follows:

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER Trigger [dbo].[trgInsertCheck]
ON [dbo].[MyTable]
FOR INSERT
AS
BEGIN
   EXEC sp_UpdateDupes
   EXEC sp_DeleteDupes
END

The first stored procedure checks for dupes and updates if any dupes are found as follows:
--------------------------------------------------------------------------------------------------------------------------------------

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER Procedure [dbo].[sp_UpdateDupes] AS

DECLARE @FirstName varchar(20), @LastName varchar(20), @Age int,  @Widgets int
DECLARE c1 CURSOR FOR

SELECT FirstName, LastName, Age, Widgets
FROM MyTable
GROUP BY FirstName, LastName, Age, Widgets
HAVING COUNT(*) > 1

OPEN c1
FETCH NEXT FROM c1
INTO @FirstName, @LastName, @Age, @Widgets
WHILE @@FETCH_STATUS = 0
BEGIN



UPDATE MyTable set Widgets = Widgets + @Widgets, Age = Age + @Age
WHERE FirstName = @FirstName AND LastName = @LastName

FETCH NEXT FROM c1
INTO @FirstName, @LastName, @Age, @Widgets
END
CLOSE c1
DEALLOCATE c1

Lastly, it finds all dupes, deletes them and inserts one row back in as follows:
--------------------------------------------------------------------------------------------------------------

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER Procedure [dbo].[sp_DeleteDupes] AS

DECLARE @FirstName varchar(20), @LastName varchar(20), @Age int, @Widgets int --declare all fields in table

DECLARE c1 CURSOR FOR

SELECT FirstName, LastName, Age, Widgets
FROM MyTable
GROUP BY FirstName, LastName, Age, Widgets
HAVING COUNT(*) > 1

OPEN c1
FETCH NEXT FROM c1
INTO @FirstName, @LastName, @Age, @Widgets
WHILE @@FETCH_STATUS = 0

BEGIN
--Delete all dupes...the cursor remembers the current record

DELETE FROM MyTable
WHERE FirstName IN (SELECT FirstName FROM MyTable GROUP BY FirstName HAVING COUNT(FirstName) > 1)
AND LastName IN (SELECT LastName FROM MyTable GROUP BY LastName HAVING COUNT(LastName) > 1)
AND Age IN (SELECT Age FROM MyTable GROUP BY Age HAVING COUNT(Age) > 1)
AND Widgets IN (SELECT Widgets FROM MyTable GROUP BY Widgets HAVING COUNT(Widgets) > 1)

--insert the current record back into the table

INSERT INTO MyTable(FirstName, LastName, Age, Widgets) VALUES(@FirstName, @LastName, @Age, @Widgets) 

FETCH NEXT FROM c1
INTO @FirstName, @LastName, @Age, @Widgets
END
CLOSE c1
DEALLOCATE c1

Is there an easier way to do this?

(I know Age doesn't make much sense in this example; just replace it with a field that would logically be incremented, such as widgets.)
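
One direction I'm considering (a sketch only, not tested against the real table, and it would replace the FOR INSERT trigger above; the trigger name is made up) is a single INSTEAD OF INSERT trigger that treats a matching FirstName/LastName as the duplicate key, the same way sp_UpdateDupes does:

CREATE TRIGGER trgUpsertCheck ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
    -- add the incoming values onto rows that already exist
    UPDATE m
    SET Age = m.Age + i.Age,
        Widgets = m.Widgets + i.Widgets
    FROM dbo.MyTable m
    JOIN inserted i ON i.FirstName = m.FirstName AND i.LastName = m.LastName;

    -- insert only the rows that are genuinely new
    INSERT INTO dbo.MyTable (FirstName, LastName, Age, Widgets)
    SELECT i.FirstName, i.LastName, i.Age, i.Widgets
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTable m
                      WHERE m.FirstName = i.FirstName AND m.LastName = i.LastName);
END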

Adamus

View 7 Replies View Related

Is There An Easier Way? Two Aggregate Fields From One Based On Dates

Oct 19, 2007

When I was looking at this I knew that I've solved this same issue before without using temp tables at all, but I need a push to jar it loose.

I have a table like this:

Field1   Balance   Date
1        200       2/14/2000
2        350       2/14/2000
3        32        2/14/2000
2        723       2/14/1998
3        354       2/14/1998
1        321       2/14/2000
2        673       2/14/1998
3        581       2/14/2000
2        574       2/14/1998
3        50        2/14/2000
1        10        2/14/2000

And essentially need this:

Field1   Total Balance Before 1/1/2000   Total Balance After 1/1/2000
1        0                               531
2        1970                            350
3        354                             663



Right now I'm splitting it into two temp tables and then joining them together.

Temp table 1 has the sum of the balance grouped by Field1 before 1/1/2000, and temp table 2 has the after-1/1/2000 criteria.

Table 1 gets updated with Field1 values from table 2 that aren't already there, and then the after-1/1/2000 balance field is merged in.

Ultimately this will be used in a SPROC for a multivalued SSRS report.
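
What I'm trying to get back to is something like this single-pass version (a sketch; it assumes a table named Balances with columns Field1, Balance and [Date] as in the sample above):

SELECT Field1,
       SUM(CASE WHEN [Date] <  '1/1/2000' THEN Balance ELSE 0 END) AS [Total Balance Before 1/1/2000],
       SUM(CASE WHEN [Date] >= '1/1/2000' THEN Balance ELSE 0 END) AS [Total Balance After 1/1/2000]
FROM Balances
GROUP BY Field1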

View 3 Replies View Related

SQL Northwind End To End Database Solutions (examples)

Apr 30, 2004

Does anyone know where I can find Northwind end-to-end database solutions (examples) written in ASP.NET (VB)? I would like to reverse engineer such a project to learn more about ASP.NET.

Thanks.

View 1 Replies View Related

Issues For Embedded SSIS Solutions

Mar 16, 2006

My warehouse app employs a distributed architecture. Extractions from disparate (wildly so) systems, and transformations and loads into a standardized schema are performed at various locations close to the source systems (both physically and "logically" speaking). There are security and other reasons for this. However this causes some related design and implementation challenges for the ETL processing.

For one, the ETL processes must be successfully operated by non-technical medical administrators, who actually have little interest in the application and sometimes even in the analytics produced by the system, who have other more pressing day-to-day work they want to be doing, in organizations where turnover is high, training is spotty, and LANs are fragile and often congested.

So, real-time feedback to the operator during processing is pretty dern important. I have built a fairly sophisticated GUI (using .Net forms inside a script component) for the operational interface (input boxes just wouldn't cut it).

But that interface is lacking real-time feedback as to processing progress at runtime.

Anyone got that T-shirt yet? I'm thinking I need progress bars and real-time task and component progress reports. Also, is there a way to capture the built-in logging output in real time?

View 5 Replies View Related

Solutions To Large Access In SQL 2005

Dec 17, 2007

I am on a project to develop a route finding system that searches for the optimal route for users of the system. The current version that I've done and successfully run uses normal database access in MS SQL 2005. I store node information in the database and the application queries it using normal "select" clauses, returning a DataTable object to the application. The result is rather slow, caused by the multiple round trips to the database server. The application takes 8 seconds to find a short route, without even considering the large amount of traffic-information calculation that I will add later. Any comments on the architecture, or on an approach to switch my algorithm to T-SQL?

View 5 Replies View Related

Need Links For Multiple Keyword Search Solutions Please

Jun 6, 2008

I have been informed that all my keyword search solutions are susceptible to SQL injection attacks. Does anyone have links discussing basic 'multiple' keyword search solutions? I would think this is a very common routine (perhaps so much so that only newbies like myself do not know it). I have read the posts about escaping ', doing replace " ' ", " '' ", using parameters, and yet every multiple keyword solution I come up with is said to be injection prone.

Example: a visitor enters: Tom's antiques into a TextBox control and the C# code behind securely generates the call to the database below.

SELECT L_Name, L_City, L_State, L_Display FROM tblCompanies WHERE L_Kwords LIKE '%' + 'Tom's' + '%' AND L_Kwords LIKE '%' + 'antiques' + '%' AND L_Display = 1 RETURN

I understand that concatenating string parts using an array and then passing the sewn-together string to a stored procedure exposes it to injection. I hope that my single keyword routine below is secure; if it is not, then I am not understanding how parameterized SPs are supposed to be constructed to protect against injection.

string CompanyName;
CompanyName = TextBox1.Text;

PROCEDURE CoNameSearch @CompanyName varchar(100)
AS
SELECT DISTINCT L_Name, L_Phone, L_City, L_State, L_Zip, L_Enabled, L_Display
FROM tblLinks
WHERE (L_Name LIKE @CompanyName + '%') AND L_Enabled = 1 AND L_Display = 1
ORDER BY L_Name
RETURN
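
For the multiple-keyword case, the kind of fully parameterized procedure I've been experimenting with looks like this (a sketch; the procedure and parameter names are made up, and extra keyword parameters would follow the same pattern):

CREATE PROCEDURE KeywordSearch
    @Keyword1 varchar(100),
    @Keyword2 varchar(100) = NULL
AS
SELECT L_Name, L_City, L_State, L_Display
FROM tblCompanies
WHERE L_Kwords LIKE '%' + @Keyword1 + '%'
  AND (@Keyword2 IS NULL OR L_Kwords LIKE '%' + @Keyword2 + '%')
  AND L_Display = 1
RETURN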
 

View 5 Replies View Related

Best Practice For Report Projects Related To Application Solutions

Feb 20, 2007

Hello Reporting Services Gurus!

I'm about to start on my first reporting services project, but before I mess it up, I'm looking for some guidance on how best to achieve my mission. Here's what I'm looking to achieve:

I have a datacentric application (SQL Server 2005 Express w/ Advanced Services backend) in which I want to build about 50 "canned" reports for the end users. I want to build the reports utilizing server mode so I can take advantage of some of Reporting Services advanced features. I'm not sure what the best practice would be to build the reporting services project. Is it better to include the report project as another project within the application solution? Or, should I build the report project independent of the application solution? What are the pros and cons of doing it either way? How does including the report project build if it's included in the application solution? How would a ClickOnce deployment deploy the report project to the report server?

My ultimate goal would be to have an "off-the-shelf" software solution that includes an installation package consisting of the application project and report project. Is it even possible due to the Reporting Services architecture to achieve an install in this manner with ClickOnce, Windows Installer, or Installshield? Or, is building the report project indepedent of the application project and deploying the reports to the report server "manually" (i.e. deploy within the report server project) the only solution?

Any help would be greatly appreciated!

Tony

View 1 Replies View Related

SQL Server 2005 Non-ansi Joins: Any Easy Solutions?

Jul 20, 2006

My company wants me to research any flags or registry tricks that would allow the non-ANSI joins '=*' and '*=' in SQL Server 2005 with a compatibility mode of 90.

The way I understand the situation, in SQL Server 2005 with the database compatibility level set to 90, non-ANSI join SQL such as the following will not work.

Select * from
Customer, Sales
Where Customer.CustomerID *= Sales.CustomerID

To work, the SQL above would have to be converted to ansi join SQL such as the following:

Select * from
Customer LEFT OUTER JOIN Sales
On Customer.CustomerID = Sales.CustomerID

Many hours would be spent browsing through millions of lines of code to find the non-ansi SQL and have changes made.

Does anyone know of any trace flags or registry entries that would allow SQL Server 2005 to work at 90 compatibility and still allow non-ANSI =* and *= joins in SQL?

Thanks,
AIMDBA

View 3 Replies View Related

Any Good Whitepapers On Security/deployment For Entire SQL Server BI Solutions?

Aug 1, 2007

At my current employer we are struggling with the best way to manage security and deployment of a project that contains databases, SSIS, SSAS and SSRS components, using configurations.

Environment (Dev):
3 SQL Server databases, all using mixed-mode security, using SQL Server security credentials.
12 SSIS packages; one master package, eleven child packages, 3 shared data sources
1 SSAS database; one cube, 15 dimensions, three referenced data sources from the SSIS project (in same solution)
6 SSRS reports, one data source to cube (not shared- doesn't appear SSRS can share datasources among other projects in the solution? Why?)

Everything runs fine in development. Now comes the tricky part.

Deploying SSIS and SSAS into production environments:

-Packages use XML config files for connection strings to three relational data sources.
-Deploy to SQL Server storage. Deploy wizard copies package dependencies (including XML config files) to default location set in INI file. When I do this, no config file shows up in remote server (remote server not set up identical to local, so directory does not exist. Need UNC path?) So, being a developer with no "special" permissions on the PROD server, what security permissions is allowing the deployment wizard from copying files to this location on a production server?
-Using a deploy script using dtutil doesn't copy the SSIS dependencies. Is this matter of using COPY or XCOPY to copy the configuration files to the dependency location? Again, in real-world practice, do developers typically change this location in the INI file to another location, or stick with the default. In either case, how does security work that allows files to get copied to the remote folder? (i.e. manual, or SQL Server manages this file folder permission through some other magic)
When using SSMS and running the package after it has been deployed on the remote server, if the config path is the default (e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Packages...) it appears to be read from the local machine's directory rather than the remote machine's directory path (do I need to use UNC paths? The wizard doesn't seem to give this option).
-When scheduling the job from SQL Agent, does the proxy account need permissions to the folder the config files sit in?
-What about the roles security on the packages themselves? Where does the server roles come into play (dtsltuser, dtsadmin)
-Because the SSAS project uses connection references to the SSIS project in BIDS, and SSIS project uses configurations, will SSAS pick up on these connections?
-What about impersonation levels for SSAS? Leave all data sources set to default, and set the database impersonation level to "UseServiceAccount"? What if the developer is not the same as the OLAP administrator on the production server? In this case, Use Service Account isn't an option, and neither is the current users credentials.
-SSAS database also has security for Full Control, but still doesn't prevent security at the data source level within the database (talking about impersonation level, not source db credentials)
-How can SSRS connections leverage other shared connections?


As you can see, there are a ton of security considerations, none of which are intuitive and can be configured multiple ways and actually work (and a ton of ways that won't work).

I need a simple cheat-sheet about each step to take to configure this so multiple developers can work without interruption, hot-deploying SSIS, SSAS, and SSRS changes into different environments (QA, PROD).

-Kory

View 2 Replies View Related

Disaster Recovery

Feb 23, 2004

Hello,

I have a problem. I am trying to set up a disaster recovery plan and have run into some problems. Just so you are aware, we have tried setting up replication but it failed since the database does not have primary keys (don't ask -- I have no control), and we have tried to set up log shipping but we do not have the Enterprise version of SQL Server so that won't work. The database is over 200GB, so shipping by tape or transferring full backups across the wire is impractical. So basically my question is: "What other options can we explore for setting up disaster recovery that do not involve 'substantial' expenditures?"

There is a direct connection set up to the off-site data store.
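
The closest thing to a cheap option I've come up with so far is rolling our own log shipping (a sketch only; the database name and paths are made up, and it assumes the standby copy was seeded once from a full backup restored WITH NORECOVERY):

-- on the main office server, on a schedule (the database must be in FULL recovery)
BACKUP LOG BigDb TO DISK = '\\drsite\ship\BigDb_log.trn' WITH INIT;

-- on the disaster recovery server, after copying the file across the connection
RESTORE LOG BigDb FROM DISK = 'D:\ship\BigDb_log.trn' WITH NORECOVERY;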

Thanks

View 12 Replies View Related

Disaster Recovery 6.5

May 11, 2000

I have the master, msdb and user database dumps. I want to build another server with the available dumps.
Please help me....

Clark

View 2 Replies View Related

Moron Causes Disaster!

Oct 1, 2000

I thought I had a good backup of my database, but it turns out my Backup Exec jobs were misconfigured, and my fallback backup via MS Backup only grabbed the files that weren't in use (the .bak files).

I re-installed my Server 2000 to fix some other problems and now I've lost all my database. My wife (the MD) is very upset over losing her schedule, billing info, and all her medical records. Whoops.

Anyway, I have two BAK files from one (and two) days ago, but when I put them back in place for the database, no data shows up. The BAK files were created on an NTFS system, which is now running FAT32 - I don't think that should matter?

Is there any hope or do I start typing?
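
If the BAK files really are SQL Server backup sets (and not just copies of the data files), is restoring them something like this? (a sketch only; the database name, logical file names and paths are guesses):

-- see which logical files the backup set contains
RESTORE FILELISTONLY FROM DISK = 'C:\backup\clinic.bak';

-- then restore it as a database, moving the files wherever they should live now
RESTORE DATABASE Clinic FROM DISK = 'C:\backup\clinic.bak'
WITH MOVE 'Clinic_Data' TO 'C:\MSSQL\Data\Clinic_Data.mdf',
     MOVE 'Clinic_Log'  TO 'C:\MSSQL\Data\Clinic_Log.ldf';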

View 1 Replies View Related

Disaster Recovery

Feb 2, 2001

Hi,
A test server I was developing on has died and all of my good work was on it.
However I have a copy of the Data folder containing the following files:

Test_Data.MDF

Test_Log.LDF

Is it possible to restore my database from these files?
If so how would I go about it?
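
What I was hoping is that something like this would work (a sketch only; the paths are made up):

EXEC sp_attach_db @dbname = 'Test',
    @filename1 = 'C:\MSSQL\Data\Test_Data.MDF',
    @filename2 = 'C:\MSSQL\Data\Test_Log.LDF';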

Many thanks in advance!

View 1 Replies View Related

Recovering From A Disaster

Nov 16, 2001

We are having major hardware replaced on our W2000 Adv Server and we
need to be prepared to restore the entire system.
If the hardware fix goes badly, worst case, the plan is to
rebuild the server and then REINSTALL SS7 and
then restore all the databases from tape backups.
My question is, which database(s) should we restore first?
Should we restore master and then proceed to restore the user
databases or restore the user databases first and restore master last?
Thank you in advance.

View 1 Replies View Related

Disaster Recovery Beyond SQL.

Dec 23, 2002

I'm looking for something beyond trying to recover databases and/or SQL Server. My group has been tasked with documenting the recovery for 30+ servers that make up our applications infrastructure, including SQL, Web, etc. I posted this here as opposed to another forum because I couldn't even find disaster recovery related topics in the other forums.

Our Systems group (Lan, Wan, etc.) isn't really much help at all. Their focus is more on the mainframe and Novell network recovery. Their perspective is that they will get any replacement server back on line in the event of failure but we are on our own from there.

There are three disaster scenarios for which we must plan: 1) A 737 does a nose dive into the Data Center. 2) A hurricane floods the Data Center. 3) A server gets smoked for whatever reason and is no longer useable.

Assume...
- that there were enough survivors to rebuild the network infrastructure at a remote site.

- we have to recover the operating system, additional software, IIS, SQL, applications, etc.

Questions:
What's the best way to capture the server's configuration on an automated basis?

What's the best way to recover the configuration mentioned above?

How do you maintain a current snapshot of any particular server and how do you ensure there's a documented history of the changes?

Does anybody use a centralized backup system? OmniBack, BackupExec?

What's the best/fastest way to get the replacement equipment?

Where else can I look to find this information?

Have I asked all the questions that I should or can you think of one I've missed?

Confused? Ask any questions you wish.

Sidney Ives
Database Administrator.

View 4 Replies View Related

Disaster Recovery

Mar 17, 2004

Yikes! I lost my autoloader backup device along with my SQL2K server. I reinstalled Win2K and can now see my RAID partitions where I had SQL Server installed, along with all of the files. Since my tape backup is dead, I cannot simply restore from media. I'd like to reinstall SQL Server and then somehow grab the files on the partition and restore all of the data back to where things were. Is this even possible? If so, could someone help me out? I am worried, from reading this list, that my primary application (SharePoint) will not be able to be restored because of security issues. Any ideas on that?

I appreciate any help you can provide.

-Den

View 3 Replies View Related






