I'm currently trying to retrieve results from a large dataset: there are over 45,000 records, and I need to use them all to perform counts etc. I have set up views, but my page is still being returned slowly. Is there anything I can do to speed this up?
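If the counts are what the page really needs, one option (a sketch only, assuming SQL Server and made-up object names) is to pre-aggregate into an indexed view, so each request reads a few summary rows instead of scanning all 45,000:

CREATE VIEW dbo.vOrderCounts
WITH SCHEMABINDING
AS
SELECT CustomerID, COUNT_BIG(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID;
GO
-- the unique clustered index is what materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_vOrderCounts ON dbo.vOrderCounts (CustomerID);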
We are trying to limit our query that returns items from our database. The query currently returns 32,000 records. We are trying to figure out an efficient way to request the 1st 50, or the 3rd 50, or the 5th 50 to display on the screen. We don't want to return the entire 32,000 and then limit what's displayed in ADO; we want the SELECT statement to only return 50 at a time. Any suggestions?
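On SQL Server 2005 or later, ROW_NUMBER() lets the SELECT itself return just one "page" of 50; this is a sketch with made-up table and column names:

DECLARE @PageNum int, @PageSize int;
SET @PageNum = 3;    -- e.g. the 3rd 50
SET @PageSize = 50;

SELECT ItemID, ItemName
FROM (
    SELECT ItemID, ItemName,
           ROW_NUMBER() OVER (ORDER BY ItemID) AS rn
    FROM dbo.Items
) AS numbered
WHERE rn BETWEEN (@PageNum - 1) * @PageSize + 1
             AND @PageNum * @PageSize;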
I have an SSIS package (SQL 2005 SP2 and Visual Studio SP1) that does the following:
OLE DB Source --> Conditional Split --> OLE DB Command #1 --> OLE DB Command #2
The source reads from database A. Each row is variable-width and up to several KB wide, including two ntext columns.
Command #1 executes a stored proc in db A, using a bunch of inputs and two output parameters.
Cmd #2 executes an update in db B, using the two output params from cmd #1 as inputs.
When the rowset size is small, around 500, everything works fine.
However, when the rowset size is larger, around 5000, SSIS hangs when trying to execute cmd #2. The profiler shows that none of the cmd #2 updates are ever executed. No error messages are produced, and the connection never times out -- it just hangs forever.
If I replace the cmd #2 updates with a simple select, everything works fine. If I replace it with a stored proc that does an update, it hangs.
The work-around I came up with was to create a new table in db B, and do inserts into the table, but unless I'm missing something, this still seems like a bug...
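For what it's worth, the staging workaround reduces to a single set-based update in db B once the data flow finishes, which also sidesteps the row-by-row OLE DB Command entirely (table and column names here are hypothetical):

UPDATE t
   SET t.Col1 = s.OutParam1,
       t.Col2 = s.OutParam2
  FROM dbo.TargetTable AS t
 INNER JOIN dbo.StagingResults AS s
         ON s.KeyCol = t.KeyCol;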
So async cursor population is supposed to create the cursor and return the cursor id quickly, while the server works on asynchronously populating the results. For a keyset-driven cursor, SQL Server stores the keyset in tempdb, which it then uses to fetch data for cursor results. Anyway, this works fine for smaller tables, but I'm finding that for large result sets, the async cursor population is very slow and indeed seems to approximate synchronous time. The wait stat I get while it is running (supposedly asynchronously) is TRANSACTION_MUTEX.
Example:

--enable async cursor
exec dbo.sp_configure 'cursor threshold', 0;
reconfigure;

declare @cursor int, @stmt nvarchar(max), @scrollopt int, @ccopt int, @rowcount int;

--example of giant result set
set @stmt = 'select * from sys.all_objects o1, sys.all_objects o2';
[code]...
Note that using the SQL "select * from sys.all_objects o1" is much faster than "select * from sys.all_objects o1, sys.all_objects o2". However, if cursor population is async, I'd expect the time to return a cursor id to be similar between the two.
I'm currently working on a BI architecture for a customer and am considering proposing the Power BI data catalog as a data distribution layer. The customer will use Power BI, but also has other BI tools.
Are data sets in the data catalog available to clients other than Power Query? E.g. are there OData feed endpoints available? If not, what would be the best way to give other tools access to the data?
Hi, I was wondering how it is possible to join three data sets from different data flows into one txt file. Let me explain a little more:
I have 3 data flows. Each of them connects to SQL Server and, via a SQL command, brings data into SSIS.
Each SQL command differs from the others, so each data set has different columns (they don't share the same format), and the number of columns also differs between them.
What I need is to join the three data sets into one txt file. How can I do this? Is it possible to join them, given their different formats, into one txt file?
Is this the best way to join different data? Would it be better to use as many OLE DB Sources as needed in a single data flow instead of separate data flows? Thanks for your help!
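If the three commands can run against the same server, one idea (a sketch only; all table and column names are made up) is to reshape the queries to a shared column list with UNION ALL, padding missing columns with NULL, and feed a single Flat File Destination from one source. SSIS's own Union All transformation does the same padding job inside the package:

SELECT CustName, OrderAmt, CAST(NULL AS varchar(30)) AS Region
FROM dbo.SourceA
UNION ALL
SELECT CustName, CAST(NULL AS money) AS OrderAmt, Region
FROM dbo.SourceB
UNION ALL
SELECT CustName, OrderAmt, Region
FROM dbo.SourceC;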
I am trying to query one table and get two different time periods of data: I am summing monthly totals to provide a running year total, but I also need last month's total in a separate column. This is what I have so far, but the subquery forces a grouping that produces duplicates.

DECLARE @LASTPD AS INT
SET @LASTPD = (SELECT MAX(LASTPERIOD) FROM TABLE)

SELECT NAME, POST_PD AS [MONTH], SUM(CHARGE_AMOUNT) AS MONTHLY_$, LASTMONTH.LAST_MONTH,
       (SELECT SUM(CHARGE_AMOUNT) AS LAST_MONTH
        FROM TABLE
        INNER JOIN TABLE2 ON TABLE2.NAME = TABLE.NAME
        WHERE POST_PD = @LASTPD AND TABLE2.NUM = 539
        GROUP BY NAME) AS LASTMONTH
INTO #TEMP_SA
FROM TABLE
INNER JOIN TABLE2 ON TABLE2.NAME = TABLE.NAME,
     (SELECT SUM(CHARGE_AMOUNT) AS LAST_MONTH
      FROM TABLE
      WHERE TABLE2.NUM = 539
      GROUP BY NAME, POST_PD)
ORDER BY NAME, POST_PD

SELECT NAME, LAST_MONTH, CAST(SUM(MONTHLY_$) AS DECIMAL(20,2)) AS YEARLY_$
FROM #TEMP_SA
GROUP BY NAME
ORDER BY NAME
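For what it's worth, conditional aggregation might get both columns in one pass without the correlated subquery or the temp table. This is only a sketch (I've renamed the placeholder TABLE to TABLE1 since TABLE is a reserved word; everything else follows the post):

DECLARE @LASTPD int;
SELECT @LASTPD = MAX(LASTPERIOD) FROM TABLE1;

SELECT T.NAME,
       SUM(CASE WHEN T.POST_PD = @LASTPD THEN T.CHARGE_AMOUNT ELSE 0 END) AS LAST_MONTH,
       CAST(SUM(T.CHARGE_AMOUNT) AS decimal(20,2)) AS YEARLY_$
FROM TABLE1 AS T
INNER JOIN TABLE2 ON TABLE2.NAME = T.NAME
WHERE TABLE2.NUM = 539
GROUP BY T.NAME
ORDER BY T.NAME;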
I would like to match two sets of data. I have set up a view that contains a group of customers and their details. I want to view this data, but also find these customers in another table by matching their surname and date of birth, then retrieve the information stored on these customers in the second table.
Does anyone have any suggestions how I would go about doing this?
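Something along these lines is what I had in mind (table and column names are made up):

SELECT v.*, t2.*
FROM dbo.CustomerView AS v
INNER JOIN dbo.OtherTable AS t2
        ON t2.Surname = v.Surname
       AND t2.DateOfBirth = v.DateOfBirth;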
Thanks in advance Humate
I have the following situation: one set of data has 274 rows (set2) and another has 264 (set1). Both data sets are similar in structure, and the values in both were extracted from the same parent table. Hope this info can substitute for DDL. I need to find the "gap" rows between these two sets. An attempt to run a query like

select count(*)
from set2
where not exists (select * from set1)

did not yield what I desired. What else should I try? TIA.
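The subquery above isn't correlated, so NOT EXISTS evaluates the same way for every row. Two usual fixes, sketched with a hypothetical key column pk: correlate it, or use EXCEPT (SQL Server 2005+), which compares entire rows:

SELECT s2.*
FROM set2 AS s2
WHERE NOT EXISTS (SELECT 1 FROM set1 AS s1 WHERE s1.pk = s2.pk);

SELECT * FROM set2
EXCEPT
SELECT * FROM set1;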
I have two tables: one with sales and another with payments against those sales. A payment may not match the exact amount of a sale, and I have to use the FIFO method to apply payments. The payment month must be >= the sales month.
How can I write a query to do this? Examples are below.
Table 1: Sales

Sale  Date    Sale Amt
1     Jun-14  1200
2     Oct-14  2400
3     Dec-14  600
4     Feb-15  12000
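A running-totals sketch of FIFO allocation (SQL Server 2012+ window functions; the Sales and Payments table/column names are invented, and the "payment month >= sales month" rule is not enforced here): each sale and each payment is treated as an interval on a cumulative number line, and the amount applied is the overlap of the two intervals.

WITH s AS (
    SELECT SaleID, SaleAmt,
           SUM(SaleAmt) OVER (ORDER BY SaleDate, SaleID) - SaleAmt AS RunFrom,
           SUM(SaleAmt) OVER (ORDER BY SaleDate, SaleID)           AS RunTo
    FROM dbo.Sales
),
p AS (
    SELECT PayID, PayAmt,
           SUM(PayAmt) OVER (ORDER BY PayDate, PayID) - PayAmt AS RunFrom,
           SUM(PayAmt) OVER (ORDER BY PayDate, PayID)          AS RunTo
    FROM dbo.Payments
)
SELECT s.SaleID, p.PayID,
       -- overlap of the two cumulative intervals = amount of this payment applied to this sale
       CASE WHEN s.RunTo < p.RunTo THEN s.RunTo ELSE p.RunTo END
     - CASE WHEN s.RunFrom > p.RunFrom THEN s.RunFrom ELSE p.RunFrom END AS AppliedAmt
FROM s
INNER JOIN p
        ON p.RunTo  > s.RunFrom
       AND p.RunFrom < s.RunTo
ORDER BY s.SaleID, p.PayID;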
I've seen that sometimes it is better to split the table into a test dataset and a training dataset, and I'd appreciate it if anyone can explain why this is...
Is there a way to put more than one data set in a list? I have a report that has three data sets with three tables. Now I want to show each report by region, one region per page, so you can view the same information for each region separately instead of all together. Is there a way to do this where I don't have to go back into my code and find a way to link everything together so it's all in one data set?
I have designed a contact manager with a Data Grid control bound to a Data Set.
When the application closes, data from the Data Set is written to an XML file, and when the application opens, data from the XML file is loaded into the Data Set and shown in the Data Grid control.
Contacts in my application can exceed 1,000, so is a Data Set capable of handling a lot of data efficiently in memory?
Hello, I am using existing code which I am trying to convert from using MS Access to SQL Server 2005... The data set works fine with the MS Access database; however, when executing with SQL Server 2005 as the data source, it generates the following error: "...The data types ntext and nvarchar are incompatible in the equal to operator..." on this line: count = adapter.Update(dataset); Not sure what I should look for, since data sets are new to me... Where should I check to fix this problem? I have noticed that the table has two columns with nvarchar...
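ntext cannot be used with the = operator at all, so my guess (only a guess) is that the generated UPDATE compares an ntext column in its WHERE clause. If the schema can be altered, converting those columns to nvarchar(max), which does support equality, usually clears this (table and column names hypothetical):

ALTER TABLE dbo.Contacts ALTER COLUMN Notes nvarchar(MAX);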
I have two queries that generate two different datasets: one is a count of members, and the other is a count of admits. I need to generate a calculated field from the two data sets called "admits per 1000", which is essentially the count of admits / count of members * 12000. I was able to calculate admits per 1000 easily in Excel; however, I need some insight on how to do it in SQL.
Below are my queries from the two datasets.
MemberMonths dataset:

SELECT factMembership.BusinessUnitCode, EffectiveCCYYMM,
       ISNULL(COUNT(DISTINCT MemberId), 0) AS MemberCount
FROM factMembership
[Code] ....
Admits dataset:
SELECT factAdmissions.BusinessUnitCode, factAdmissions.AdmitCCYYMM,
       ISNULL(COUNT(AdmitNum), 0) AS [Count of Admits]
FROM factAdmissions
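Assuming both queries group by business unit and month (the parts elided above), joining the two aggregates gives the ratio directly in SQL. A sketch, with the GROUP BY clauses guessed from the SELECT lists:

WITH Members AS (
    SELECT BusinessUnitCode, EffectiveCCYYMM,
           ISNULL(COUNT(DISTINCT MemberId), 0) AS MemberCount
    FROM factMembership
    GROUP BY BusinessUnitCode, EffectiveCCYYMM
),
Admits AS (
    SELECT BusinessUnitCode, AdmitCCYYMM,
           ISNULL(COUNT(AdmitNum), 0) AS AdmitCount
    FROM factAdmissions
    GROUP BY BusinessUnitCode, AdmitCCYYMM
)
SELECT m.BusinessUnitCode, m.EffectiveCCYYMM,
       a.AdmitCount * 12000.0 / NULLIF(m.MemberCount, 0) AS AdmitsPer1000
FROM Members AS m
INNER JOIN Admits AS a
        ON a.BusinessUnitCode = m.BusinessUnitCode
       AND a.AdmitCCYYMM = m.EffectiveCCYYMM;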
What I need to be able to do is somehow select, based on a day, the total value of open orders. I have tried to do this in the database, but it becomes fixed and quite cumbersome (this is a simplified example; in reality I have line information and line component information). I am not hugely skilled with MDX and SSAS, but I know there are some semi-additive functions; I want somebody to be able to pick a day and get the total value of only open orders.
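On the relational side, "open as of a day" is usually expressed with a half-open date test; a sketch with an invented Orders table, in case it helps frame the cube logic:

DECLARE @AsOf date;
SET @AsOf = '2015-06-30';

SELECT SUM(OrderValue) AS OpenOrderValue
FROM dbo.Orders
WHERE OpenDate <= @AsOf
  AND (CloseDate IS NULL OR CloseDate > @AsOf);  -- still open on @AsOf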
I was wondering if anyone has ever written a chart with multiple datasets.
I need to be able to show sales dollars inflow by order date on one line, and the other line needs to show sales dollars delivered by delivery date. So all sections of the chart (Values, Category Groups, and Series Groups) will come from 2 different datasets.
I have tried, but it will not allow aggregates in the series groups.
Hi there everyone. I have a stored procedure called "PagingTable" that I use for performing searches and specifying how many results to show per 'page' and which page I want to see. This allows me to do my paging on the server side (the database tier) and only the results that actually get shown on the webpage fly across from my database server to my web server. The code might look something like this:
The problem is the stored procedure actually returns two result sets. The first result set contains information regarding the total number of results found, the number of pages, and the current page. The second result set contains the data to be shown (the columns specified). In 'classic' ASP I did it like this:
'Open the recordset
rsItems.Open strSQL, conn, 0, 1

'Get the values required for drawing the paging table
intCurrentPage = rsItems.Fields("CurrentPage").Value
intTotalPages = rsItems.Fields("TotalPages").Value
intTotalRows = rsItems.Fields("TotalRows").Value

'Advance to the next recordset
Set rsItems = rsItems.NextRecordset
I am now trying to do this in ASP.NET 2.0 using the datasource control and the repeater control. Any idea how I can accomplish two things:
A) Bind the repeater control to the second resultset
B) Build a "pager" of some sort using the values from the first resultset
A DB2 stored procedure returns two data sets when executed from SSMS using a linked server. Is there any simple way to save the two data sets into two different tables?
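As far as I know, INSERT ... EXEC can only capture multiple result sets when they all match the target table's structure, so two differently shaped result sets cannot be split into two tables in one call. One workaround is two wrapper procedures on the DB2 side, each returning a single result set, captured one at a time (all names invented):

INSERT INTO dbo.Results1 (Col1, Col2)
EXEC ('CALL MYLIB.MYPROC1(?)', @parm) AT DB2LINK;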
I am studying indexes and keys. I have a table whose first column holds a fixed-width string of data to be loaded, which is parsed in a view according to the data types within the fixed-width specification.
Example:
Column A: (name, phone, house, cost of house, zipcode, county, state, country) - a view will later split this large varchar string.
Column B: the source filename of the data load (varchar(256)) ....
a. Would there be a benefit to adding a clustered or nonclustered index (if so, which, and some pointers on why)?
b. Is there a benefit to making one of these two columns a primary key (millions of records), or to adding a third new column as a PK?
c. The view parses the data in column A so it ends up looking more like "name, phone, house, cost of house, zipcode, county, state, country", each having its own column.
- Are there any pros/cons of adding indexes (if so, which) to the view instead of the tables, or to both, once the data is parsed? (A sketch of one possible arrangement follows.)
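One possible arrangement, sketched with invented names: neither the multi-KB string (a) nor the filename (b) clusters well, so a narrow surrogate key takes the clustered primary key and the filename gets a nonclustered index for lookups:

ALTER TABLE dbo.RawLoad ADD LoadID int IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.RawLoad ADD CONSTRAINT PK_RawLoad PRIMARY KEY CLUSTERED (LoadID);
CREATE NONCLUSTERED INDEX IX_RawLoad_SourceFile ON dbo.RawLoad (SourceFile);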
Hello guys, here is my problem: I am developing an ASP.NET web app in .NET 2.0. I have some sensitive data in my database, which is encrypted using DES (with a key that is only known by the top-level authorities). Now there is an option to change the secret key; on changing the key, the sensitive data has to be decrypted using the old key and then re-encrypted using the new key. If the number of records increases, I am afraid that this might take a long time and the application might look as if it has hung. Guys, I have no clue how to do this. If you have any idea how to implement it, please let me know; any help would be appreciated. Vignesh
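Wherever the DES work actually runs, batching the rekey keeps the table usable and the app responsive. A T-SQL sketch of the batching pattern only; fn_Rekey and the Rekeyed flag are purely hypothetical, since in reality the decrypt/re-encrypt would happen in the application, which alone holds the keys:

WHILE 1 = 1
BEGIN
    UPDATE TOP (1000) dbo.SensitiveData
       SET Payload = dbo.fn_Rekey(Payload),  -- hypothetical stand-in for decrypt-with-old-key / encrypt-with-new-key
           Rekeyed = 1                       -- hypothetical progress flag
     WHERE Rekeyed = 0;

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to convert
END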
I am trying to import data into SQL Server 7. The table will have 700-800 columns, and the data will be about 150,000 records at a time. The data source is a flat file.
First I create the table using a database schema, and second I populate the table. The problem is that most of the data is numeric and is to be used for statistical analysis.
So far I have tried BULK INSERT, bcp, and DTS. DTS is the only method that has worked in any way, shape, or form, but it requires importing each column as a varchar. Importing into my pre-created table doesn't work, because some of the source columns are interpreted as character data and refused insertion into an int field. BULK INSERT and bcp both give error messages, and I am wondering if that is because of the size of the insert statement required to handle so many fields.
For the moment I am just trying to import the data in any way possible; eventually it will have to run as an automated process, with the table structure probably needing to be altered as well.
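Since the all-varchar DTS import is the one thing that works, one hedged follow-up is to treat that table as a staging area and cast into the typed table afterwards (all names made up):

INSERT INTO dbo.StatsTyped (Col1, Col2, Col3)
SELECT CAST(Col1 AS int), CAST(Col2 AS float), CAST(Col3 AS int)
FROM dbo.StatsStaging;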
Any help/suggestions would be very gratefully received.
I have a web site that allows users to enter large strings into a database (comments, etc.). What is the best way to do that? Right now I have them limited to 25 characters and the data type is varchar. Is there a better way?
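If the strings need to grow past the current cap, widening the column is the usual first step (names made up; plain varchar tops out at 8000 characters, with varchar(max) available from SQL Server 2005 onward):

ALTER TABLE dbo.Comments ALTER COLUMN CommentText varchar(8000);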
I was asked in an interview when to use a table variable versus a temp table. I told the interviewer that when the row count is small, say hundreds or a thousand, use a table variable, else use a temp table. He then asked what I meant by "less data": even a thousand rows may involve many columns and so make a large data set.
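A sketch of the structural point I could have made: it is not the row count by itself, but that table variables carry no statistics, so the optimizer's estimates never improve as the data grows.

CREATE TABLE #Work (ID int PRIMARY KEY, Val varchar(50));    -- temp table: gets statistics, can be indexed after creation
DECLARE @Work table (ID int PRIMARY KEY, Val varchar(50));   -- table variable: no statistics, optimizer assumes very few rows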
I want to build a system that will have about 1 million rows in a table in a SQL Server database. I am using this for a web application and accessing it via a JDBC type 4 driver, but I display only 20 records at a time using pagination (as in Google). What will be the best way to go about this?
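On SQL Server 2012 or later, OFFSET/FETCH expresses the page directly and works fine from JDBC; a sketch with invented names (older versions would need the ROW_NUMBER technique sketched earlier):

DECLARE @PageNum int, @PageSize int;
SET @PageNum = 1;
SET @PageSize = 20;

SELECT ItemID, Title
FROM dbo.Items
ORDER BY ItemID
OFFSET (@PageNum - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;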