There Must Be An Easier/Better Way...
Oct 26, 2005
I have a database with tables representing city blocks, houses, and people. On my main .aspx page I have a DataGrid which displays a list of the blocks, followed by a count of the houses in each block and a count of the people in each block. Right now I have something that works, but it is awful: every time the page loads it makes a ton of connections to the database, and the code is convoluted spaghetti. Now I need to add more functionality, but I can't rightly do that until I find a more efficient way to do this.

Step one: the program connects to the database and gets a list of the blocks using the statement "SELECT blockid FROM blocks".
Step two: the program iterates through each blockid in the list and executes the statement "SELECT houseid FROM houses WHERE blockid = (whatever)".
Step three: the program counts the rows returned from step two to determine how many houses are in that block.
Step four: the program iterates through each houseid from step two and executes the statement "SELECT COUNT(personid) FROM people WHERE houseid = (whatever)"; the result is added to a variable that keeps a running count.
Step five: the final value of the variable from step four is the number of people in that block.

My question for you is: how can this be done more efficiently? Can I group together some awesome SQL statement that will get these counts? I thought about doing something like "SELECT blockid, (SELECT COUNT(houseid) FROM houses WHERE blockid = something) AS HouseCount", but I can't figure out how to take the value in the first column (blockid) and pass it to the inner SELECT statement.

Any thoughts on how to make this better? Below is the full code for my function, in case you want to examine it in more detail. Also, I am in the process of changing the SELECT statements into stored procedures, so don't beat me up too badly over that bit of ugliness in my function. Thanks.

Private Function GetBlockDataSet() As DataSet
    Dim myconnection As SqlConnection
    Dim objDataAdapter As SqlDataAdapter
    Dim query, connectionstring As String
    Dim tempDS As New DataSet
    Dim houseDS As New DataSet
    Dim peopleDS As New DataSet
    Dim DC1 As New DataColumn
    Dim DC2 As New DataColumn
    Dim i, j, peoplecount As Int32
    Dim DR, DR2 As DataRow

    query = "SELECT blockid FROM blocks"
    connectionstring = ConfigurationSettings.AppSettings("ConnectionString")

    myconnection = New SqlConnection(connectionstring)
    objDataAdapter = New SqlDataAdapter(query, myconnection)
    objDataAdapter.Fill(tempDS, "BlockList")

    ' Add the two count columns to the block list
    DC1.DataType = System.Type.GetType("System.Int32")
    DC2.DataType = System.Type.GetType("System.Int32")
    DC1.ColumnName = "HouseCount"
    DC2.ColumnName = "PeopleCount"
    tempDS.Tables("BlockList").Columns.Add(DC1)
    tempDS.Tables("BlockList").Columns.Add(DC2)

    i = 0
    For Each DR In tempDS.Tables("BlockList").Rows
        ' One query per block to list its houses
        query = "SELECT houseid FROM Houses WHERE blockid = '"
        query &= tempDS.Tables("BlockList").Rows(i).Item(0)
        query &= "'"
        objDataAdapter = New SqlDataAdapter(query, myconnection)
        objDataAdapter.Fill(houseDS)

        tempDS.Tables("BlockList").Rows(i).Item(1) = houseDS.Tables(0).Rows.Count
        tempDS.Tables("BlockList").Rows(i).Item(2) = 0
        j = 0
        peoplecount = 0
        For Each DR2 In houseDS.Tables(0).Rows
            ' One query per house to count its people
            query = "SELECT COUNT(personid) FROM people WHERE HouseID = '"
            query &= houseDS.Tables(0).Rows(j).Item(0)
            query &= "'"
            objDataAdapter = New SqlDataAdapter(query, myconnection)
            objDataAdapter.Fill(peopleDS)
            peoplecount += peopleDS.Tables(0).Rows(0).Item(0)
            j = j + 1
            peopleDS.Clear()
        Next
        tempDS.Tables("BlockList").Rows(i).Item(2) = peoplecount
        houseDS.Clear()

        i = i + 1
    Next

    GetBlockDataSet = tempDS

    ' Here comes the garbage collection
    ' (note: tempDS itself is not disposed here, because it has just been
    ' assigned as the return value)
    myconnection.Close()
    myconnection.Dispose()
    myconnection = Nothing
    objDataAdapter.Dispose()
    objDataAdapter = Nothing
    houseDS.Dispose()
    houseDS = Nothing
    peopleDS.Dispose()
    peopleDS = Nothing
    DC1.Dispose()
    DC1 = Nothing
    DC2.Dispose()
    DC2 = Nothing
End Function
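For the record, the per-block counts can be collapsed into a single round trip by letting SQL Server do the counting (a sketch only, using the blocks/houses/people tables and blockid/houseid/personid columns described above):

Code Snippet
SELECT b.blockid,
       COUNT(DISTINCT h.houseid) AS HouseCount,
       COUNT(p.personid)         AS PeopleCount
FROM blocks b
LEFT JOIN houses h ON h.blockid = b.blockid
LEFT JOIN people p ON p.houseid = h.houseid
GROUP BY b.blockid

Filling the DataGrid from this one result set removes both inner loops. The correlated-subquery form hinted at in the post also works; the inner SELECT can reference the outer row by aliasing the outer table, e.g. SELECT b.blockid, (SELECT COUNT(*) FROM houses h WHERE h.blockid = b.blockid) AS HouseCount FROM blocks b.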
View 5 Replies
Apr 20, 2007
This statement returns the date formatted as 'yyyymmddhhnn', but there has to be an easier way. Can someone help?
DECLARE @dt datetime;
SELECT @dt = GETDATE();
SELECT CONVERT(varchar(40),@dt,112) +
RIGHT('0' + CAST(DATEPART("hh", @dt) AS varchar(2)), 2) +
RIGHT('0' + CAST(DATEPART("mi", @dt) AS varchar(2)), 2) AS isodt;
Thank you.
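One shortcut (a sketch only, assuming the same yyyymmddhhmm target) is to start from the ODBC canonical format and strip the separators instead of reassembling the parts:

Code Snippet
DECLARE @dt datetime;
SELECT @dt = GETDATE();
-- style 120 gives 'yyyy-mm-dd hh:mi:ss'; the first 16 characters hold everything
-- down to the minute, so removing the separators yields yyyymmddhhmm
SELECT REPLACE(REPLACE(REPLACE(CONVERT(varchar(16), @dt, 120), '-', ''), ' ', ''), ':', '') AS isodt;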
View 4 Replies
View Related
Feb 20, 2008
Hello,
I just finished a lengthy process (for me) of writing this cursor that gets each row from the Fact table, and creates a running total of the billable hours (hours from rows where BillableType = 1). It starts over when the month changes, the year changes, or the employee changes.
I did this again for a running total of billable hours for the year.
I put these in 2 stored procedures and run them in an Execute SQL control flow task in my SSIS package. It seems like SSIS is designed to make this kind of procedure simpler, and I'm wondering if I'm doing this in the best way, or if there's a more efficient way to do this using tasks inside SSIS. Can anyone advise? Any help is greatly appreciated.
Best,
Andy
Don't feel obligated to read this cursor if you understand the problem above. It's not color coded because when I copied it from SQL I lost the tabbing on the case expressions.
Code Snippet
CREATE PROCEDURE YearTotalHours AS
DECLARE @FactBillingId Int;
DECLARE @dt DateTime;
DECLARE @EmpKey Int;
DECLARE @YearTotalHours Float;
DECLARE @NewYearTotalHours Float;
DECLARE @NewYear Int;
DECLARE @BillableType Int;
DECLARE FactBillingFinalRows CURSOR FOR
SELECT FactBillingId
FROM FactBillingFinal
OPEN FactBillingFinalRows
FETCH NEXT FROM FactBillingFinalRows
INTO @FactBillingId
WHILE @@FETCH_STATUS = 0
BEGIN
SET @EmpKey = (SELECT EmployeeKey FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
SET @dt = (SELECT dt FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
SET @BillableType = (SELECT BillableTypeKey FROM FactBillingFinal WHERE FactbillingId = @FactBillingId)
SET @NewYear = (SELECT Year(@dt))
SET @NewYearTotalHours =
    CASE
        WHEN @FactBillingId = 1 THEN
            CASE
                WHEN @BillableType = 1 THEN (SELECT Hours FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
                ELSE 0
            END
        ELSE
            CASE
                WHEN @EmpKey <> (SELECT EmployeeKey FROM FactBillingFinal WHERE FactBillingId = @FactBillingId - 1) THEN
                    CASE
                        WHEN @BillableType = 1 THEN (SELECT Hours FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
                        ELSE 0
                    END
                ELSE
                    CASE
                        WHEN YEAR(@dt) = (SELECT YEAR(dt) FROM FactBillingFinal WHERE FactBillingId = @FactBillingId - 1) THEN
                            CASE
                                WHEN @BillableType = 1 THEN (SELECT @YearTotalHours + (SELECT Hours FROM FactBillingFinal WHERE FactBillingId = @FactBillingId))
                                ELSE
                                    CASE
                                        WHEN @NewYearTotalHours IS NULL THEN 0
                                        ELSE @NewYearTotalHours
                                    END
                            END
                        ELSE
                            CASE
                                WHEN @BillableType = 1 THEN (SELECT Hours FROM FactBillingFinal WHERE FactBillingId = @FactBillingId)
                                ELSE 0
                            END
                    END
            END
    END
UPDATE FactBillingFinal SET YearTotalHours = @NewYearTotalHours WHERE FactBillingId = @FactBillingId
SET @YearTotalHours = @NewYearTotalHours;
FETCH NEXT FROM FactBillingFinalRows
INTO @FactBillingId
END
CLOSE FactBillingFinalRows
DEALLOCATE FactBillingFinalRows
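For what it's worth, the same yearly running total can be written set-based, which drops the cursor entirely (a sketch only, using the column names from the cursor above and assuming FactBillingId reflects the processing order):

Code Snippet
-- running total of billable hours per employee, restarting each calendar year
UPDATE f
SET YearTotalHours = (SELECT ISNULL(SUM(f2.Hours), 0)
                      FROM FactBillingFinal f2
                      WHERE f2.EmployeeKey = f.EmployeeKey
                        AND YEAR(f2.dt) = YEAR(f.dt)
                        AND f2.BillableTypeKey = 1
                        AND f2.FactBillingId <= f.FactBillingId)
FROM FactBillingFinal f

Inside SSIS itself, a Script Component in the data flow can keep the same running total in a variable as the rows stream through, which avoids a second pass over the table.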
View 9 Replies
View Related
Jan 7, 2008
Hi,
I have a table named load which has a load value for each hour, i.e. load_1, load_2, ... load_24. I want to find the max value among the 24 hourly loads and assign it to a variable, say load_max.
Format of table
load_ID load_1 load_2 load_3 load_4 load_5 load_6...... load_24
1 2 4 5 6 7 8 23 56 44 22 64 33 67 24 345 34 75 57 24 23 24 24 66 789
These are the 24 load values along with the load_ID.
I have lots of rows, with load_ID running from 1 to 100.
The output should display the load_ID, load_max, and load_min for each row (after comparing the 24 loads with each other).
How can I do this in SQL Server?
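One way to do it in a single statement (a sketch, assuming SQL Server 2005 or later and a table named load with the columns shown above) is to UNPIVOT the 24 hourly columns into rows and take MAX/MIN per load_ID:

Code Snippet
SELECT load_ID,
       MAX(load_value) AS load_max,
       MIN(load_value) AS load_min
FROM [load]
UNPIVOT (load_value FOR hour_of_day IN
        (load_1, load_2, load_3, load_4, load_5, load_6, load_7, load_8,
         load_9, load_10, load_11, load_12, load_13, load_14, load_15, load_16,
         load_17, load_18, load_19, load_20, load_21, load_22, load_23, load_24)) AS u
GROUP BY load_ID

On SQL Server 2000 the same result takes a nested CASE (or a UNION ALL of the 24 columns) instead of UNPIVOT.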
View 7 Replies
View Related
Jan 14, 2008
Good afternoon,
I have a shell plugin here; it compiles fine and can be viewed in BI Dev Studio when choosing the DM technique with the proper wizard.
I also have a K-Means implementation that estimates the number of clusters using a statistical semi-empirical index (the PBM index).
This implementation is done in C# and works fine, but it has to receive all the data in the database (all variables for each row) in order to do the proper vector calculations in CSR (Compressed Sparse Row) form.
Besides, as you know, K-Means needs all the data at once because of the cluster mean (centroid) calculation.
So, I have some questions:
1) Where to place the call to the K-Means implementation in the shell passing as argument an object holding all the data ?
2) After this call, with the data clustered, what other objects must be modified in order to use Microsoft Cluster Viewer ?
3) I will need to create a new column or a new table on the database to specify which data belongs to which cluster. Can I open an ADO connection as I normally do in other programs from inside the plugin or is there another (easier/better) way to do so ?
Thanks a lot once more.
Best regards,
-Renan Souza
View 1 Replies
View Related
Oct 5, 2006
hi,
I get just as frustrated each time I try to configure email alerts for failed jobs on MS SQL; it is beyond me why Microsoft couldn't just let you point at an SMTP server to send through and be done with it.
Is there a way to avoid having to set up an email client on our SQL 7 and 2000 servers, either through some 3rd-party app or another simple solution?
thanks in advance,
daniel
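If installing a MAPI client on the servers is the objection, one workaround commonly used on SQL 2000 is sending through CDOSYS with the OLE Automation procedures, pointed straight at an SMTP server. A rough sketch only: the server name and addresses are placeholders, OLE Automation procedures must be allowed, and error checking of the return codes is omitted.

Code Snippet
DECLARE @msg int
EXEC sp_OACreate 'CDO.Message', @msg OUT
EXEC sp_OASetProperty @msg, 'Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/sendusing").Value', 2
EXEC sp_OASetProperty @msg, 'Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").Value', 'smtp.example.local'
EXEC sp_OAMethod @msg, 'Configuration.Fields.Update', NULL
EXEC sp_OASetProperty @msg, 'From', 'sqlagent@example.local'
EXEC sp_OASetProperty @msg, 'To', 'dba@example.local'
EXEC sp_OASetProperty @msg, 'Subject', 'SQL Agent job failed'
EXEC sp_OASetProperty @msg, 'TextBody', 'A scheduled job failed; check the job history.'
EXEC sp_OAMethod @msg, 'Send', NULL
EXEC sp_OADestroy @msg

A failure alert can then be wired up as a final T-SQL job step that runs only when the previous step fails.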
View 2 Replies
View Related
Feb 22, 2000
I have a table that is corrupted and want to remove it and add a backup version of it. How can I remove this table and add it again, preserving all the foreign key constraints, permissions, dependencies, etc.? Simply exporting and importing does not work. I could painfully remove the table and then painfully reconnect it again, recreating all the foreign key constraints, etc., by hand; but there has to be an easier way! What is the how-to?
Thank you!
Llyal
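The general recipe (a sketch only; every object name below is hypothetical, the drop/re-create has to be repeated for each referencing table, and permissions must be re-granted) is to script the constraints, drop the foreign keys pointing at the corrupted table, swap the tables with sp_rename, and put the keys back:

Code Snippet
-- 1. script out the existing FKs and permissions (sp_helpconstraint / Enterprise Manager)
-- 2. drop each foreign key that references the corrupted table
ALTER TABLE ChildTable DROP CONSTRAINT FK_ChildTable_MyTable
-- 3. swap in the restored copy
EXEC sp_rename 'MyTable', 'MyTable_corrupt'
EXEC sp_rename 'MyTable_backup', 'MyTable'
-- 4. re-create the foreign keys against the restored table
ALTER TABLE ChildTable ADD CONSTRAINT FK_ChildTable_MyTable
    FOREIGN KEY (MyTableID) REFERENCES MyTable (MyTableID)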
View 1 Replies
View Related
Jul 10, 2007
Dear All
I am very new to MS SQL Server and I am wondering whether there is some tool which would allow me to build pivot tables in SQL more easily. At the moment writing such a query can be quite challenging and difficult.
Is there any software which allows you to do it more intuitively and gives you some visual feedback about the query you are building?
I would be very grateful for any help with this.
wujtehacjusz
View 2 Replies
View Related
Apr 18, 2007
I'm trying to figure out what solution (replication, mirroring, clustering) would work best for me.
I have been reading many articles in BOL and in this forum. Most talk about getting data TO a backup/standby/subscriber, but I can't find a lot of info regarding getting the data BACK after a disaster is over.
We have a main office and a disaster recovery facility. Most of the time there are no data updates at the disaster location. So, I need to get data to the disaster facility via WAN (latency is not a huge issue - end of day syncing is fine) for backup purposes. In the event of a disaster, the main office will be offline and data changes will happen at the disaster site. When the disaster is "over" and we return to the main office, what's the best scheme to reverse the data back to the main office to start business again? We are a financial company, and have gigabytes of relatively static data. Most changes are current day. So, to snapshot a 100GB database when I know only a few hundred MB changes a day doesn't seem feasible to me.
Most replication scenarios (at least from what I see) can't easily "reverse" the replication after a disaster situation. I'm looking at merge replication on a schedule, which looks promising, but I was wondering if anyone else has any ideas or suggestions?
View 5 Replies
View Related
Apr 12, 2008
I'm trying to select from a table with three columns. I want these columns to be spread out among multiple columns based on the values. I hope someone can shed some light on this. I might be able to use pivot, but don't know how the syntax would roll for this.
Here is the example of dummy values and the output I am trying to obtain.
drop table table1
create table table1
(Category int, Place int, Value int)
insert into table1 values
(1, 1, 20)
insert into table1 values
(1,2, 12)
insert into table1 values
(1,3, 30)
insert into table1 values
(2,1, 34)
insert into table1 values
(2,2, 15)
insert into table1 values
(2,3, 78)
select Category,
(select top 1 value from table1 where place = 1 and Category = t1.Category) as place1,
(select top 1 value from table1 where place = 2 and Category = t1.Category) as place2,
(select top 1 value from table1 where place = 3 and Category = t1.Category) as place3
from Table1 t1
group by Category
Thanks for the help.
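Since the post mentions PIVOT, this is roughly how the syntax rolls for that shape (a sketch, assuming SQL Server 2005 or later and the table1 created above):

Code Snippet
SELECT Category,
       [1] AS place1, [2] AS place2, [3] AS place3
FROM (SELECT Category, Place, Value FROM table1) AS src
PIVOT (MAX(Value) FOR Place IN ([1], [2], [3])) AS p

MAX(Value) plays the same role as the "top 1 value" subqueries when each (Category, Place) pair has a single row.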
View 5 Replies
View Related
Nov 11, 2006
Greetings all,
I have a bit of brainteaser that's going to take some serious thought.
I'm importing information from .xls files into a SQL table. The problem is I need to check for dupes, increment certain fields when a dupe is found, and then either not insert the duplicate or delete it.
For example, if I have Adam, Turner, 32, 50 already in the table and someone tries to insert Adam, Turner, 32, 50...I need it to increment to read Adam, Turner, 64, 100 and not insert the record. (Notice 2 fields were incremented.)
With that, I have created an INSERT trigger as follows:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER Trigger [dbo].[trgInsertCheck]
ON [dbo].[MyTable]
FOR INSERT
AS
BEGIN
EXEC sp_UpdateDupes
EXEC sp_DeleteDupes
END
The first stored procedure checks for dupes and updates if any dupes are found as follows:
--------------------------------------------------------------------------------------------------------------------------------------
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER Procedure [dbo].[sp_UpdateDupes] AS
DECLARE @FirstName varchar(20), @LastName varchar(20), @Age int, @Widgets int
DECLARE c1 CURSOR FOR
SELECT FirstName, LastName, Age, Widgets
FROM MyTable
GROUP BY FirstName, LastName, Age, Widgets
HAVING COUNT(*) > 1
OPEN c1
FETCH NEXT FROM c1
INTO @FirstName, @LastName, @Age, @Widgets
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE MyTable set Widgets = Widgets + @Widgets, Age = Age + @Age
WHERE FirstName = @FirstName AND LastName = @LastName
FETCH NEXT FROM c1
INTO @FirstName, @LastName, @Age, @Widgets
END
CLOSE c1
DEALLOCATE c1
Lastly, it finds all dupes, deletes them and inserts one row back in as follows:
--------------------------------------------------------------------------------------------------------------
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER Procedure [dbo].[sp_DeleteDupes] AS
DECLARE @FirstName varchar(20), @LastName varchar(20), @Age int, @Widgets int --declare all fields in table
DECLARE c1 CURSOR FOR
SELECT FirstName, LastName, Age, Widgets
FROM MyTable
GROUP BY FirstName, LastName, Age, Widgets
HAVING COUNT(*) > 1
OPEN c1
FETCH NEXT FROM c1
INTO @FirstName, @LastName, @Age, @Widgets
WHILE @@FETCH_STATUS = 0
BEGIN
--Delete all dupes...the cursor remembers the current record
DELETE FROM MyTable
WHERE FirstName IN (SELECT FirstName FROM MyTable GROUP BY FirstName HAVING COUNT(FirstName) > 1)
AND LastName IN (SELECT LastName FROM MyTable GROUP BY LastName HAVING COUNT(LastName) > 1)
AND Age IN (SELECT Age FROM MyTable GROUP BY Age HAVING COUNT(Age) > 1)
AND Widgets IN (SELECT Widgets FROM MyTable GROUP BY Widgets HAVING COUNT(Widgets) > 1)
--insert the current record back into the table
INSERT INTO MyTable(FirstName, LastName, Age, Widgets) VALUES(@FirstName, @LastName, @Age, @Widgets)
FETCH NEXT FROM c1
INTO @FirstName, @LastName, @Age, @Widgets
END
CLOSE c1
DEALLOCATE c1
Is there an easier way to do this?
(I know Age doesn't make much sense in this example, but just replace it with a field that would logically be incremented, such as widgets.)
Adamus
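For what it's worth, the whole thing can be collapsed into one set-based INSTEAD OF trigger that folds duplicates before they ever land in the table (a sketch only, assuming FirstName/LastName identify a person as in the procedures above; the trigger name is made up, and duplicates within a single incoming batch would still need handling):

Code Snippet
CREATE TRIGGER trgFoldDupes ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
    -- add the incoming Age/Widgets onto rows that already exist
    UPDATE m
    SET m.Age = m.Age + i.Age,
        m.Widgets = m.Widgets + i.Widgets
    FROM dbo.MyTable m
    INNER JOIN inserted i
        ON i.FirstName = m.FirstName AND i.LastName = m.LastName

    -- insert only the rows that had no existing match
    INSERT INTO dbo.MyTable (FirstName, LastName, Age, Widgets)
    SELECT i.FirstName, i.LastName, i.Age, i.Widgets
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTable m
                      WHERE m.FirstName = i.FirstName
                        AND m.LastName = i.LastName)
END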
View 7 Replies
View Related
Oct 19, 2007
When I was looking at this I knew that I've solved this same issue before without using temp tables at all, but I need a push to jar it loose.
I have a table like this:
field1   Balance   Date
1        200       2/14/2000
2        350       2/14/2000
3        32        2/14/2000
2        723       2/14/1998
3        354       2/14/1998
1        321       2/14/2000
2        673       2/14/1998
3        581       2/14/2000
2        574       2/14/1998
3        50        2/14/2000
1        10        2/14/2000
And essentially need this.
field1   Total Balance Before 1/1/2000   Total Balance After 1/1/2000
1        0                               531
2        1970                            350
3        354                             663
Right now I'm splitting it into two temp tables and then joining them together.
Temp Table 1 has the sum of the balance grouped by field1 before 1/1/2000 and Temp table 2 has the after 1/1/2000 criteria.
Table 1 gets updated with field1 values from table 2 that aren't already there, and then the after-1/1/2000 balance column is merged in.
Ultimately this will be used in a SPROC for a multi-valued SSRS report.
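The temp tables can be avoided entirely with conditional aggregation in one pass (a sketch; the table name Balances is hypothetical, and the columns follow the sample above):

Code Snippet
SELECT field1,
       SUM(CASE WHEN [Date] <  '20000101' THEN Balance ELSE 0 END) AS [Total Balance Before 1/1/2000],
       SUM(CASE WHEN [Date] >= '20000101' THEN Balance ELSE 0 END) AS [Total Balance After 1/1/2000]
FROM Balances
GROUP BY field1

Because every field1 value comes back from the same SELECT, there is nothing to merge afterwards, which also keeps the SPROC for the SSRS report down to a single statement.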
View 3 Replies
View Related
Nov 8, 2006
I have built a large package and due to database changes (varchar to nvarchar) I need to do a data conversion of all the flat file columns I am bringing in, to a unicode data type. The way I know how to do this is via the data conversion component/task. My question is, I am looking for an easy way to "Do All Columns" and "Map all Columns" without doing every column by hand in both spots.
I need to change all the columns; can I do this en masse? More importantly, once I convert all of these and connect the output to my destination, it fails to map the converted fields by name. Is there a way, when using the data conversion task, to still get it to map by name when connecting it to the OLE DB destination?
I know I can use the wizard to create the base package, but I have already built all the other components, renamed and set the data type and size on all the columns (over 300) and so I don't want to have to re-do all that work. What is the best solution?
In general I would be happy if I could get the post-conversion columns to map automatically. But because the column is named DataConversion.CustomerID, it will not map to the CustomerID field on the destination. Any suggestions on the best way to do this would save me hours of work...
Thanks.
View 1 Replies
View Related