Hi, I'm currently trying to retrieve results from a large dataset; there are over 45,000 records and I need to use them all to perform counts, etc. I have set up views, but my page is still being returned slowly. Is there anything I can do to speed this up? Thanks, Gemma
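If the counts are the heavy part, one option on SQL Server is an indexed view that pre-aggregates them so the page only reads the stored totals. A minimal sketch, assuming a hypothetical dbo.Orders table grouped by CustomerId:

[CODE]
-- a minimal sketch, assuming a hypothetical dbo.Orders table
CREATE VIEW dbo.vOrderCounts
WITH SCHEMABINDING
AS
SELECT CustomerId,
       COUNT_BIG(*) AS OrderCount   -- indexed views require COUNT_BIG(*)
FROM dbo.Orders
GROUP BY CustomerId;
GO

-- materializing the view stores the counts and keeps them up to date
CREATE UNIQUE CLUSTERED INDEX IX_vOrderCounts
    ON dbo.vOrderCounts (CustomerId);
[/CODE]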
We are trying to limit our query that returns items from our database. The query currently returns 32,000 records. We are trying to figure out an efficient way to request the 1st 50, or the 3rd 50, or the 5th 50 to display on the screen. We don't want to return the entire 32,000 and then limit what's displayed on the screen in ADO. We want the SELECT statement to only return 50 at a time. Any suggestions?
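On SQL Server 2005 and later, ROW_NUMBER() gives exactly this kind of paging. A sketch, assuming a hypothetical dbo.Items table keyed on Id:

[CODE]
-- a sketch: return only the Nth block of 50 rows (hypothetical table/columns)
DECLARE @PageNum int, @PageSize int;
SET @PageNum = 3;    -- e.g. the 3rd 50
SET @PageSize = 50;

SELECT Id, Name
FROM (SELECT Id, Name,
             ROW_NUMBER() OVER (ORDER BY Id) AS rn
      FROM dbo.Items) AS numbered
WHERE rn BETWEEN (@PageNum - 1) * @PageSize + 1
             AND @PageNum * @PageSize;
[/CODE]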
I have an SSIS package (SQL 2005 SP2 and Visual Studio SP1) that does the following:
OLE DB Source --> Conditional Split --> OLE DB Command #1 --> OLE DB Command #2
The source reads from database A. Each row is variable-width and up to several KB wide, including two ntext columns.
Command #1 executes a stored proc in db A, using a bunch of inputs and two output parameters.
Cmd #2 executes an update in db B, using the two output params from cmd #1 as inputs.
When the rowset size is small, around 500, everything works fine.
However, when the rowset size is larger, around 5000, SSIS hangs when trying to execute cmd #2. The profiler shows that none of the cmd #2 updates are ever executed. No error messages are produced, and the connection never times out -- it just hangs forever.
If I replace the cmd #2 updates with a simple select, everything works fine. If I replace it with a stored proc that does an update, it hangs.
The work-around I came up with was to create a new table in db B, and do inserts into the table, but unless I'm missing something, this still seems like a bug...
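For what it's worth, the staging-table work-around can be finished off with one set-based UPDATE in db B instead of row-by-row OLE DB Commands, which also sidesteps the hang. A sketch, with hypothetical table and column names:

[CODE]
-- run in db B after the data flow has landed the rows in dbo.Staging
UPDATE t
SET    t.Col1 = s.OutParam1,
       t.Col2 = s.OutParam2
FROM   dbo.TargetTable AS t
JOIN   dbo.Staging     AS s ON s.KeyCol = t.KeyCol;

TRUNCATE TABLE dbo.Staging;   -- ready for the next run
[/CODE]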
So async cursor population is supposed to create the cursor and return the cursor ID quickly, while the server works on asynchronously populating the results. For a keyset-driven cursor, SQL Server stores the keyset in tempdb, which it then uses to fetch data for cursor results. Anyway, this works fine for smaller tables, but I'm finding that for large result sets, the async cursor population is very slow and indeed seems to approximate synchronous time. The wait stat I get while it is running (supposedly asynchronously) is TRANSACTION_MUTEX.
Example:

[CODE]
--enable async cursor
exec dbo.sp_configure 'cursor threshold', 0;
reconfigure;

declare @cursor int, @stmt nvarchar(max), @scrollopt int, @ccopt int, @rowcount int;

--example of giant result set
set @stmt = 'select * from sys.all_objects o1, sys.all_objects o2';
...
[/CODE]
Note that using the SQL "select * from sys.all_objects o1" is much faster than "select * from sys.all_objects o1, sys.all_objects o2". However, if cursor population is async, I'd expect the time to return a cursor id to be similar between the two.
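Not the elided code, but for illustration, a keyset cursor can be opened through the same internal API the cursor threshold setting affects; the option values here are assumptions (1 = keyset scrolling, 1 = read-only concurrency):

[CODE]
-- illustration only: open the cursor and time how quickly the id comes back
set @scrollopt = 1;   -- keyset-driven (assumed value)
set @ccopt = 1;       -- read-only (assumed value)
exec dbo.sp_cursoropen @cursor output, @stmt, @scrollopt output, @ccopt output, @rowcount output;
select @cursor as cursor_id, @rowcount as [rowcount];
[/CODE]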
Yet another very much newbie question, I suspect...
I want to know how I can copy data from one field in one database to another field in another database.
Basically I have an ASP-driven forum, and want to upgrade to a new ASP.NET forum... The problem is, obviously, that the database structure is different, and I don't want to lose all the data from my current forum... So I would have to pick which table field needs to be copied to where in the new database.
Is this easy to do? I will be using SQL Express 2005.
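It mostly comes down to a series of INSERT ... SELECT statements, one per target table, mapping old columns to new ones. A sketch with hypothetical names for the old and new forum schemas:

[CODE]
-- a sketch, with hypothetical table/column names for both forum schemas
INSERT INTO NewForum.dbo.Posts (Title, Body, AuthorName, CreatedOn)
SELECT p.Subject, p.Message, a.UserName, p.PostedDate
FROM   OldForum.dbo.Post   AS p
JOIN   OldForum.dbo.Author AS a ON a.AuthorId = p.AuthorId;
[/CODE]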
Hi all!

I have an application that needs to copy the database structure from one database to another without using the "Generate SQL Script" function in Enterprise Manager. I'd like to do this from within a stored procedure. Can someone recommend the best approach for this? I've seen references to using SQL-DMO from a stored procedure using the sp_OA* procs in other postings to this group, but was wondering if there was an easier way? Can I use bcp and then use xp_cmdshell from within my stored procedure? It's not clear to me from the documentation whether bcp copies both structure and data or just data. Is there a better way?

Thanks in advance for any help!
Karen
Hi. I need to move data from one database table to another across database instances. A simple example of the typical move would be:
[CODE]
INSERT into destination_db.dbo.table1
SELECT column1, column2, column3, column4 from source_db.dbo.table2
[/CODE]
My options are:
1. Create an SSIS package to perform the move.
2. Create sprocs and schedule the data move as jobs.
3. Write .NET code using sprocs to perform the move.
I'll have to move hundreds of thousands of records, so I want the option that provides the best performance. I'm guessing that option 3 will be the slowest.
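For a few hundred thousand rows on the same instance, option 2 written as a single set-based statement is usually hard to beat; a sketch against the example tables above (TABLOCK can reduce logging overhead on newer versions; an assumption worth testing):

[CODE]
-- one set-based move; test with and without the TABLOCK hint
INSERT INTO destination_db.dbo.table1 WITH (TABLOCK)
       (column1, column2, column3, column4)
SELECT column1, column2, column3, column4
FROM   source_db.dbo.table2;
[/CODE]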
... and preserving the relationships.

(Note, this is more of a SQL question than a SQL-related ASP.NET question...)

Say I have two databases, D1 and D2, with the same three tables:

Companies
Departments
Employees

With the standard one-to-many relationships down the line, with each table having a PK, IDENTITY field like CompanyID, DepartmentID, and EmployeeID.

I have a smattering of data in each of D1 and D2 for these tables, with overlaps in the ID field values. What I want to be able to do is copy over D1's data to D2, preserving the relationships in D1 even though there are ID overlaps in D2. So the tool I'd use would have to be smart enough to check for ID dups in D2 and appropriately change the ID values in D1's tables, maintaining the relationships.

Is there some built-in SQL tool to do this or do I have to do this myself?

Thanks!
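There is no built-in "remap the identities" tool that I know of, but the usual hand-rolled pattern is an old-ID-to-new-ID mapping table per level. A sketch, assuming SQL Server 2008+ (MERGE's OUTPUT may reference source columns, unlike plain INSERT's) and hypothetical column names:

[CODE]
DECLARE @CompanyMap TABLE (OldID int, NewID int);

-- ON 1 = 0 forces every D1 row down the INSERT branch while
-- letting OUTPUT capture the old and new identity side by side
MERGE D2.dbo.Companies AS tgt
USING D1.dbo.Companies AS src
   ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (CompanyName) VALUES (src.CompanyName)
OUTPUT src.CompanyID, inserted.CompanyID INTO @CompanyMap (OldID, NewID);

-- departments re-point at the new company IDs via the map;
-- repeat the same pattern with a department map for Employees
INSERT INTO D2.dbo.Departments (DepartmentName, CompanyID)
SELECT d.DepartmentName, m.NewID
FROM   D1.dbo.Departments AS d
JOIN   @CompanyMap        AS m ON m.OldID = d.CompanyID;
[/CODE]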
guys,

I have a project that needs to move more than 100,000 records from one database table to another database table every week. Currently, users input a date range from the web UI, and my stored procedure takes that date range, INSERTs the records into a table in another database, then deletes the records, but it takes a really long time to finish this action (up to 1 or 2 hours).

My question is whether there is some other way I should do this to speed up the action. I am thinking about using bcp to copy those records to a data file and then using bcp to insert it into the SQL Server table. Is this the right way to do it, or should I consider another solution (and if so, what is the solution)?

Thanks a lot!
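Before reaching for bcp, it may be worth checking whether the slowness is one giant logged transaction; moving the rows in chunks keeps the log and locks small. A sketch with hypothetical names (SQL Server 2005+ for DELETE ... OUTPUT):

[CODE]
CREATE TABLE #moved (Id int PRIMARY KEY, Col1 varchar(100), RecordDate datetime);

DECLARE @batch int, @from datetime, @to datetime;
SET @batch = 10000;
SET @from = '20070101';   -- the user-entered range (assumed)
SET @to   = '20070107';

WHILE 1 = 1
BEGIN
    BEGIN TRAN;

    -- remove one chunk from the source, capturing the rows as we go
    DELETE TOP (@batch)
    FROM   SourceDb.dbo.Live
    OUTPUT deleted.Id, deleted.Col1, deleted.RecordDate INTO #moved
    WHERE  RecordDate BETWEEN @from AND @to;

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT;
        BREAK;
    END

    INSERT INTO TargetDb.dbo.Archive (Id, Col1, RecordDate)
    SELECT Id, Col1, RecordDate FROM #moved;

    COMMIT;
    TRUNCATE TABLE #moved;
END
[/CODE]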
When copying data to a remote SQL2K5 destination from a SQL2K5 source database, both using mixed sql server security mode, my job generates the following error:
[Transfer SQL Server Objects Task] Error: Execution failed with the following error: "Cannot apply value null to property Login: Value cannot be null..".
This occurs after the destination database tables have been truncated and replacement data from the source would begin to copy.
The same process can be successfully completed from the Management studio with a simple data export process. However, when I run the saved package again from the BI interface, I get this error.
My search-engine searches have yielded numerous hits of others having the same problem, with one Microsoft rep indicating it was a bug that would be resolved in SP1. I am working with SP1. Oddly, there is only mention of this in the forums. No KB article from MS addresses the problem, and I do not see it addressed elsewhere at sqlservercentral.
It appears that others have switched to Integrated Security and resolved the problem. However, I do not have that option with a remotely hosted database.
Does anyone have any information concerning this problem?
Hi, I want to delete all the records from a large database (200-300 tables) because I want to make some changes and start from scratch, but keep the structure of the database (keys, indexes, etc.). I tried to generate a script, but when I run it I get too many errors. Please help, thanks!
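One common way to empty every table while keeping the schema is the undocumented (but long-standing) sp_MSforeachtable helper, disabling constraints first so delete order doesn't matter. A sketch; test on a copy of the database first:

[CODE]
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';          -- disable FKs
EXEC sp_MSforeachtable 'DELETE FROM ?';                                 -- empty every table
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'; -- re-enable and re-validate

-- optionally reset identity seeds back to the start
EXEC sp_MSforeachtable 'IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1 DBCC CHECKIDENT (''?'', RESEED, 0)';
[/CODE]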
I'm currently working on a BI architecture for a customer, and consider to propose the Power BI data catalog as a data distribution layer. The customer will use Power BI, but also has other BI tools.
Are data sets in the data catalog available to other clients than Power Query alone? E.g. are there OData feed endpoints available? If not, what would be the best way to give other tools access to the data?
Hi, I was wondering how it is possible to join three data sets from different data flows into one txt file. Let me explain a little more:
I have 3 data flows. Each of them connects to SQL Server and, via a SQL command, brings data into SSIS.
Each SQL command differs from the others, so each data set has different columns (they don't have the same format). The number of columns also differs between them.
What I need is to join the three data sets into one txt file. How can I do this? Is it possible to join them, with their different formats, into one txt file?
Is this the best way to join the different data? Is it better to use as many OLE DB Sources as needed instead of different data flows? Thanks for your help!
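If the three result sets really must land in one txt file, one option is to align them in SQL with UNION ALL, padding the narrower sets with NULLs so the shapes match, and feed a single data flow to one Flat File Destination. A sketch with hypothetical queries:

[CODE]
-- pad each query out to the widest column list so all three line up
SELECT 'Set1' AS SourceSet, ColA, ColB, ColC, NULL AS ColD
FROM   dbo.Table1
UNION ALL
SELECT 'Set2', ColX, ColY, NULL, NULL
FROM   dbo.Table2
UNION ALL
SELECT 'Set3', Col1, Col2, Col3, Col4
FROM   dbo.Table3;
[/CODE]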
I am trying to query one table and get two different time periods of data. I am summing monthly totals to provide a running year total, but I also need last month's total in a separate column. This is what I have so far, but the subquery makes me group it, which produces duplicate grouping.

[CODE]
DECLARE @LASTPD AS INT
SET @LASTPD = (SELECT MAX(LASTPERIOD) FROM TABLE)

SELECT NAME,
       POST_PD AS [MONTH],
       SUM(CHARGE_AMOUNT) AS MONTHLY_$,
       LASTMONTH.LAST_MONTH,
       (SELECT SUM(CHARGE_AMOUNT) AS LAST_MONTH
        FROM TABLE
        INNER JOIN TABLE2 ON TABLE2.NAME = TABLE.NAME
        WHERE POST_PD = @LASTPD
          AND TABLE2.NUM = 539
        GROUP BY NAME) AS LASTMONTH
INTO #TEMP_SA
FROM TABLE
INNER JOIN TABLE2 ON TABLE2.NAME = TABLE.NAME,
       (SELECT SUM(CHARGE_AMOUNT) AS LAST_MONTH
        FROM TABLE
        WHERE TABLE2.NUM = 539
        GROUP BY NAME, POST_PD
        ORDER BY NAME, POST_PD

SELECT NAME,
       LAST_MONTH,
       CAST(SUM(MONTHLY_$) AS DECIMAL(20,2)) AS YEARLY_$
FROM #TEMP_SA
GROUP BY NAME
ORDER BY NAME
[/CODE]
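One way to avoid the correlated subquery (and its extra grouping) is conditional aggregation: sum the year total and the last-period total in the same pass. A sketch using the names from the post, with the placeholder TABLE renamed TABLE1 since TABLE is a reserved word:

[CODE]
DECLARE @LASTPD AS INT;
SET @LASTPD = (SELECT MAX(LASTPERIOD) FROM TABLE1);

SELECT  T1.NAME,
        CAST(SUM(T1.CHARGE_AMOUNT) AS DECIMAL(20,2)) AS YEARLY_$,
        SUM(CASE WHEN T1.POST_PD = @LASTPD
                 THEN T1.CHARGE_AMOUNT ELSE 0 END)   AS LAST_MONTH
FROM    TABLE1 AS T1
        INNER JOIN TABLE2 ON TABLE2.NAME = T1.NAME
WHERE   TABLE2.NUM = 539
GROUP BY T1.NAME
ORDER BY T1.NAME;
[/CODE]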
I would like to match two sets of data. I have set up a view of data that contains a group of customers and their details. I want to view this data, but also find these customers in another table, based on matching their surname and date of birth, then retrieve the information stored on these customers from the second table.
Does anyone have any suggestions on how I would go about doing this?
Thanks in advance, Humate
Quote: Originally posted by Michael Valentine Jones
It takes real skill to produce something good out of a giant mess.
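For the surname/date-of-birth match described above, a plain inner join between the view and the second table is usually enough; a sketch with hypothetical object and column names:

[CODE]
SELECT v.*, t.*    -- trim to the columns actually needed
FROM   dbo.vCustomerDetails AS v
JOIN   dbo.CustomerArchive  AS t
       ON  t.Surname     = v.Surname
       AND t.DateOfBirth = v.DateOfBirth;
[/CODE]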
I have the following situation. One set of data has 274 rows (set2) and another has 264 (set1). Both data sets are similar in structure, and the values for both of them were extracts from the same parent table. Hope this info can substitute for DDL. I need to find the "gap" rows between these two sets.

Attempted to run a query like

[CODE]
select count(*)
from set2
where not exists (select * from set1)
[/CODE]

but it did not yield what I desired. What else should I try?

TIA.
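The NOT EXISTS above isn't correlated, so it only asks "is set1 empty?". Two forms that do compare row against row, sketched with a hypothetical key column id (EXCEPT needs SQL Server 2005+):

[CODE]
-- rows in set2 with no exact duplicate in set1
SELECT * FROM set2
EXCEPT
SELECT * FROM set1;

-- or correlated on a key column
SELECT s2.*
FROM   set2 AS s2
WHERE  NOT EXISTS (SELECT 1 FROM set1 AS s1 WHERE s1.id = s2.id);
[/CODE]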
I have "inherited" a project working on a SQL 2000 database. The project calculates commissions based on data from an invoice header table and an invoice details table. The goal is to extract data from the primary database tables, perform limited manipulations, and store the resultant data into a table in a different database for reference and reporting. I have the query complete that extracts and manipulates the data, but I am stuck in trying to add this data to the final storage/reporting table. I would also like to do error checking to be certain that a record is not "re-inserted" from the source data to the destination table. Thanks in advance, Barry
I have two tables, one with sales and another with payments against those sales. A payment may not match the exact amount of a sale, and I have to use the FIFO method to apply payments. The payment month must be >= the sales month.
How can I write a query to do this? Examples are below.
Table 1: Sales

Sale  Sale Date  Sale Amt
1     Jun-14     1200
2     Oct-14     2400
3     Dec-14     600
4     Feb-15     12000
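One set-based way to express FIFO is running totals: each sale and each payment occupies an interval on the cumulative number line, and the overlap of two intervals is the amount applied. A sketch assuming SQL Server 2012+ and a hypothetical Payments table; the "payment month >= sales month" rule would still need an extra check on top:

[CODE]
WITH s AS (
    SELECT SaleID, SaleAmt,
           SUM(SaleAmt) OVER (ORDER BY SaleDate, SaleID
                              ROWS UNBOUNDED PRECEDING) AS SaleTo
    FROM dbo.Sales
), p AS (
    SELECT PayID, PayAmt,
           SUM(PayAmt) OVER (ORDER BY PayDate, PayID
                             ROWS UNBOUNDED PRECEDING) AS PayTo
    FROM dbo.Payments
)
SELECT  s.SaleID, p.PayID,
        -- overlap of the two running-total intervals = amount applied
        CASE WHEN s.SaleTo < p.PayTo THEN s.SaleTo ELSE p.PayTo END
      - CASE WHEN s.SaleTo - s.SaleAmt > p.PayTo - p.PayAmt
             THEN s.SaleTo - s.SaleAmt ELSE p.PayTo - p.PayAmt END AS AppliedAmt
FROM    s
JOIN    p ON s.SaleTo - s.SaleAmt < p.PayTo
         AND p.PayTo  - p.PayAmt  < s.SaleTo;
[/CODE]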
I've seen that sometimes it is better to split the table into a test dataset and a training dataset, and I'd appreciate it if anyone can explain why this is...
Is there a way to put more than one data set in a list? I have a report that has three data sets with three tables. Now I want to show each report by region, one region per page, so you can view the same information for each region separately instead of all together. Is there a way to do this where I don't have to go back into my code and find a way to link everything together so it's in one data set?
I have designed a contact manager with a Data Grid control bound to a Data Set.
When the application closes, data from the Data Set is written to an XML file, and when the application opens, data from the XML file is loaded into the Data Set and shown in the Data Grid control.
Contacts in my application can exceed 1,000. So, is a Data Set capable of handling a lot of data efficiently in memory?
Hello, I am using existing code, which I am trying to convert from using MS Access to SQL Server 2005... The data set works fine with the MS Access database; however, when executing with SQL Server 2005 as the data source, it generates the following error: "..The data types ntext and nvarchar are incompatible in the equal to operator..." in this line: count = adapter.Update(dataset); Not sure what I should look for, since data sets are new to me.. Where should I check to fix this problem? I have noticed that the table has two columns with nvarchar...
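The usual culprit is an equality comparison against an ntext column, which T-SQL does not allow. Two common fixes on the database side, sketched with hypothetical table/column names (ntext is deprecated in favor of nvarchar(max) anyway):

[CODE]
-- preferred: move the column off ntext entirely
ALTER TABLE dbo.Contacts ALTER COLUMN Notes nvarchar(max) NULL;

-- or cast at query time if the schema can't change
SELECT *
FROM   dbo.Contacts
WHERE  CAST(Notes AS nvarchar(max)) = @value;
[/CODE]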
I have two queries that generate two different datasets. One is a count of members, and the other is a count of admits. I need to generate a calculated field from the two data sets called admits per 1000, which is essentially the count of admits / count of members * 12000. I was able to calculate admits per 1000 easily in Excel; however, I need some insight on how to do it in SQL.
Below are my queries from the two datasets.
MemberMonths dataset:

[CODE]
Select factMembership.BusinessUnitCode, EffectiveCCYYMM,
       ISNULL(count(Distinct MemberId), 0) As MemberCount
From factMembership
....
[/CODE]
Admits dataset:
[CODE]
SELECT Factadmissions.BusinessUnitCode, factAdmissions.AdmitCCYYMM,
       ISNULL(Count(AdmitNum), 0) As [Count of Admits]
FROM factAdmissions
[/CODE]
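One way to combine them is to wrap each query in a CTE and join on business unit and month, then do the arithmetic in the select; the join keys below are assumptions based on the column names:

[CODE]
WITH Members AS (
    SELECT factMembership.BusinessUnitCode, EffectiveCCYYMM,
           ISNULL(COUNT(DISTINCT MemberId), 0) AS MemberCount
    FROM   factMembership
    GROUP BY factMembership.BusinessUnitCode, EffectiveCCYYMM
), Admits AS (
    SELECT factAdmissions.BusinessUnitCode, factAdmissions.AdmitCCYYMM,
           ISNULL(COUNT(AdmitNum), 0) AS AdmitCount
    FROM   factAdmissions
    GROUP BY factAdmissions.BusinessUnitCode, factAdmissions.AdmitCCYYMM
)
SELECT m.BusinessUnitCode, m.EffectiveCCYYMM,
       m.MemberCount, a.AdmitCount,
       a.AdmitCount * 12000.0 / NULLIF(m.MemberCount, 0) AS AdmitsPer1000
FROM   Members m
JOIN   Admits  a ON a.BusinessUnitCode = m.BusinessUnitCode
                AND a.AdmitCCYYMM      = m.EffectiveCCYYMM;
[/CODE]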
What I need to be able to do is somehow select, based on a day, the total value of open orders. I have tried to do this in the database, but it becomes fixed and quite cumbersome (this is a simplified example; in reality I have line information and line component information). I am not hugely skilled with MDX and SSAS, but I know there are some semi-additive functions; I want somebody to be able to pick a day and have the total value of only open orders.
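If falling back to the relational source is an option, the "open as of a day" logic is straightforward in T-SQL, which can also help validate whatever the cube returns. A sketch with hypothetical order header columns:

[CODE]
-- total value of orders still open on a chosen day (hypothetical schema)
DECLARE @AsOfDate datetime;
SET @AsOfDate = '20150630';

SELECT SUM(OrderValue) AS OpenOrderValue
FROM   dbo.OrderHeader
WHERE  OrderDate <= @AsOfDate
  AND (CloseDate IS NULL OR CloseDate > @AsOfDate);
[/CODE]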