Realtime Data Collection - How Do I Put SQL7 In Charge?
Feb 6, 2000
I have a product where we feed a SQL 7 database with data collected from manufacturing. Presently, the data transport program is in charge of getting prepared data from the machines and inserting it into the DB. This design assumes SQL 7 is always ready and able - which is not true, because customer queries, backups, etc. consume resources. There is a low-level buffer at the manufacturing level in case the transport dies, but the transport is ignorant of SQL Server distress, so it keeps hammering the DB's front door. I'm looking for help in putting SQL Server in charge of allowing data in - when resources are adequate. It seems I need a function that can determine server stress QUICKLY to forestall the transport program, plus a buffer for records at the transport layer. Has anyone built a system where SQL Server CHECKS for waiting records, or OKs an external program to send until told to stop? What reliably indicates low server resources? Has anyone ever used MSMQ?
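I'm not aware of a built-in hook for this, but as a sketch of the "SQL decides" idea: a SQL 7-era gatekeeper procedure that the transport program calls before each batch. The procedure name, the blocked-session signal, and the threshold below are all made up for illustration.

-- Hypothetical gatekeeper proc; the transport buffers locally whenever it returns 0.
CREATE PROCEDURE dbo.usp_OkToSend
AS
BEGIN
    DECLARE @blocked int
    -- blocked sessions are one crude, cheap-to-read distress signal on SQL 7
    SELECT @blocked = COUNT(*) FROM master..sysprocesses WHERE blocked <> 0
    IF @blocked > 5
        SELECT 0 AS OkToSend   -- server under pressure: hold records in the transport buffer
    ELSE
        SELECT 1 AS OkToSend   -- resources look adequate: send the next batch
END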
"Black Holes are proof SOMEBODY, SOMEWHERE really did have a particularly bad Y2K problem!"
Is SSIS a tool for extracting realtime data from staging to a data warehouse? Realtime in my case can be loading every 15 minutes, but no more than 30 minutes. I have a data warehouse whose data is refreshed once a day, and that has worked fine. The data that I extract into the warehouse comes from a staging database which is a realtime replication of multiple production databases. Once a day, I have to pause replication on staging for a couple of hours to refresh the data warehouse. That's the only way SSIS can pull the data correctly; if replication is running while SSIS pulls data, it always copies fewer rows than it is supposed to.
I cannot afford to pause replication every 15 minutes just to refresh the data warehouse. Has anyone ever had this problem, or is there a best practice for how to do this?
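For what it's worth, one common pattern (a sketch only, assuming each source table carries a last-modified column and a small watermark table, neither of which may exist in your staging database) is an incremental pull, so SSIS only reads rows changed since the previous load and replication never has to be paused:

-- Hypothetical watermark-driven extract; all table and column names are assumptions.
DECLARE @LastLoaded datetime
SELECT @LastLoaded = LastLoadedAt FROM dw.ExtractWatermark WHERE TableName = 'Orders'

INSERT INTO dw.FactOrders (OrderID, Amount, ModifiedAt)
SELECT s.OrderID, s.Amount, s.ModifiedAt
FROM staging.Orders AS s
WHERE s.ModifiedAt > @LastLoaded

UPDATE dw.ExtractWatermark
SET LastLoadedAt = (SELECT MAX(ModifiedAt) FROM staging.Orders)
WHERE TableName = 'Orders'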
Is it possible to use SSIS to synchronize the data between a Foxpro .dbf and a compatible SQL Server table on a near realtime basis?
I have successfully created an SSIS package that inserts data into the SQL Server table, but this is only useful for migrating data. What I need is a way to ensure that the data in the SQL Server table matches that in the .dbf on a near-realtime basis.
Or is there a way to link from SQL Server to the .dbf (similar to an Oracle DBLink).
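On the DBLink question: SQL Server can do something similar with a linked server over the Visual FoxPro OLE DB provider. A sketch only (the provider must be installed on the SQL Server box, and the folder path and names below are assumptions):

EXEC sp_addlinkedserver
    @server     = N'FOXPRO_SRC',
    @srvproduct = N'Visual FoxPro',
    @provider   = N'VFPOLEDB',
    @datasrc    = N'C:\FoxData\'   -- folder containing the .dbf files
GO
-- each .dbf in the folder is then queryable as a table, e.g.:
SELECT * FROM OPENQUERY(FOXPRO_SRC, 'SELECT * FROM mytable')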
Not really sure if this is the right place to ask... but I was wondering. We have a data warehouse system set up where we host multiple DBs for different customers. Currently the way they charge for usage is something like $0.005 a row for all major tables, i.e. not the tables we need for processing data for that customer. However, we are looking at implementing SharePoint Services here and using this SQL Server as the backend SQL Server. I have been asked to see if I could come up with a better way of billing for the SQL usage. Here are some of my thoughts; I'm just wondering what you all do or how you feel about this:
For the data warehouse part, when billing per row, someone with a 2-column table and 500 rows will be billed exactly the same amount as someone with a 50-column table and 500 rows, yet the 50-column entries use more space.
Billing for hard drive space utilization could be tricky, because I would either (a) need to move all my staging-type tables to another DB for processing, or (b) make sure I wipe the staging tables out after each update (a sketch of capturing per-database sizes is below).
Now, I am not sure how SharePoint works or how it saves data... but can I put each site I have under SharePoint in its own DB? I.e. the Client1 SP site is in the Client1 DB and the Client2 SP site is in the Client2 DB, but both of them are running through the same SharePoint instance?
Would it be better to bill a flat rate based on the number of columns in a table and then the number of rows in that table? I am not sure... Any advice you can give would be greatly appreciated.
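If you did move to billing by storage rather than by rows, here is a sketch of how the per-database footprint could be captured on a schedule. The 'Client%' naming convention, the Billing database and its snapshot table are all assumptions, and sp_MSforeachdb is undocumented:

-- Hypothetical size snapshot per client database, in MB.
EXEC sp_MSforeachdb
    'IF ''?'' LIKE ''Client%''
         INSERT INTO Billing.dbo.DbSizeSnapshot (DbName, SizeMB, CapturedAt)
         SELECT ''?'', SUM(size) * 8 / 1024, GETDATE()
         FROM [?].dbo.sysfiles'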
What is the better table design for a data collection application?
1. Vertical model (pk, attributeName, attributeValue)
2. Custom columns (pk, custom1, custom2, custom3 ... custom50)
Since the data elements collected may change year over year, which model better handles this column dynamism?
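For reference, a minimal sketch of option 1, the vertical (EAV) shape, with assumed names; it absorbs new data elements without schema changes, at the price of harder querying and weaker typing:

CREATE TABLE dbo.CollectedAttribute (
    RecordID       int          NOT NULL,   -- the collection event this value belongs to
    AttributeName  varchar(100) NOT NULL,   -- e.g. 'Temperature', 'BatchNo'
    AttributeValue varchar(255) NULL,
    CONSTRAINT PK_CollectedAttribute PRIMARY KEY (RecordID, AttributeName)
)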
I work on SQL Server 2008 R2 and need a recursive charge amount calculation process. I want to write a stored procedure in which I calculate the head sum based on a parameter head and a given amount. The picture describes my DB input set; from the input set I need to calculate the total charge amount for the given head.
[Input set 1 - picture of the input table with columns Head, Amount, IsPercent, PercentHead, Percent, Given Amount, Calculated Amount, Working Sequence; the sample row is for the 'Utility' head.]
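Without the picture the exact percentage rules are not clear, but on 2008 R2 a recursive head roll-up is normally written as a recursive CTE inside the stored procedure. A sketch with assumed table and column names only:

CREATE PROCEDURE dbo.usp_CalcHeadTotal
    @HeadID      int,
    @GivenAmount money
AS
BEGIN
    ;WITH HeadTree AS
    (
        SELECT HeadID, ParentHeadID, Amount        -- anchor: the head passed in
        FROM dbo.ChargeHead
        WHERE HeadID = @HeadID
        UNION ALL
        SELECT c.HeadID, c.ParentHeadID, c.Amount  -- recurse into child heads
        FROM dbo.ChargeHead AS c
        JOIN HeadTree AS t ON c.ParentHeadID = t.HeadID
    )
    SELECT SUM(Amount) AS TotalChargeAmount        -- @GivenAmount would feed the percent-based heads
    FROM HeadTree
END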
I need to feed a head office SQL Server with data from regional servers. The servers are spread across all continents. Data input is done locally on the head office server as well, plus data needs to be shipped from the other servers. So, to clarify: the head office server is not a standby, and mirroring is out of the picture, I think. Initially I thought of shipping a log every 15 minutes and restoring it on the head office server, but is that going to create an issue for the local data processing?
I am trying to configure the data collector on my server. I configured the data collector on server A and set it up on server B, but the "Query Statistics collection set" does not show me any data.
I right-click and select the "Collect and upload now" item and get a success result, but in the report I can't see any data...
Also, on the log page of data collection I see many errors with messages like this:
"Failed to create kernel event for collection set: {2DC02BD6-E230-4C05-8516-4E8C0EF21F95}. Inner Error ------------------> Cannot create a file when that file already exists."
I have tried some solutions, like disabling and enabling it again, re-configuring, and removing and configuring it again... but none of them work.
Is it possible that Data Collection causes a massive increase in MB/sec to tempdb? I cannot find the connection with tempdb, and I did set the cache file, but on the same disk.
Or could it be something different? Over the last two weeks, what I checked was that read/write MB/s to tempdb increased progressively.
At one point it was about 20 MB/sec,
then it reset and went back to 1 MB/sec.
What I checked: the external company which installed SQL Server created only one file for tempdb. Next week, or during a break if possible, I would like to split it into 8 files over the weekend.
I also saw that the tempdb MDF kept growing, although only 8-10% of it was in use.
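For the 8-file change itself, the usual approach (a sketch; the path, size and growth values are assumptions, and the files are normally sized equally) is simply ALTER DATABASE tempdb ADD FILE, repeated for each extra file:

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB,
          FILEGROWTH = 512MB)
-- repeat with tempdev3 ... tempdev8; adding new files does not require a restart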
This is Sanjeev. I have an SSIS package, and using my C# program I want to add one Execute Package Task to this package's sequence container.
It creates the new package without any problem, but when I open the package and try to move the newly created Execute Package Task, it gives the following error:
"The element cannot be found in a collection. This error happens when you try to retrieve an element from a collection on a container during the execution of the package."
I enabled Data Collection on one of the servers and planned to make it the centralized Management Data Warehouse. I configured data collection on it and can view reports. Next, I went to another server and ran "Set up data collection" to use my first instance as the centralized database. But the issue is that I can only see reports for the first server. Am I missing something here?
I did exactly as explained in this video [URL] .....
I have an application which collects different types of history data from a SQL database. One type of that data is history log information, where the number of records can be really big.
What I would like to implement is a kind of paging mechanism for collecting that data. For example, the first call would return the first 100 rows, then the next call would get rows 101 to 200, etc.
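A sketch of one common paging shape on the SQL Server side (works on 2005 and later; the log table and column names are assumptions), which the application can call with a page number and page size:

DECLARE @PageNumber int, @PageSize int
SET @PageNumber = 2        -- second call returns rows 101-200
SET @PageSize   = 100

SELECT LogID, LogTime, LogMessage
FROM (
    SELECT LogID, LogTime, LogMessage,
           ROW_NUMBER() OVER (ORDER BY LogTime, LogID) AS rn
    FROM dbo.HistoryLog
) AS numbered
WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1
             AND @PageNumber * @PageSize
ORDER BY rn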
I have a two-node SQL 2012 AlwaysOn HADR cluster (v11.0.3412) with 4 availability groups configured. The AG groups are set to synchronous mode and the secondary is not readable (we do not want the synchronous replica to be readable, so that reads cannot cause contention and we maintain fast performance).
On the secondary we are getting a persistent failure with the Data Collector job called Collection_Set_3_Upload. The failure occurs within the second job step. That job step is executing the following command:
dcexec -u -s 3 -i "$(ESCAPE_DQUOTE(MACH))$(ESCAPE_DQUOTE(INST))" The error message is as follows:
Log Job History (collection_set_3_upload)
Step ID: 2
Server: CLUSTERNODE2
Job Name: collection_set_3_upload
Step Name: collection_set_3_upload_upload
Duration: 00:00:07
[Code] ....
I know I can prevent this error message by enabling readable secondaries, but we do not want this.
I have tried stopping the data collection jobs and purging the cache directory but to no avail. It will succeed the first time then persistently fail again with the same message every time after that.
In addition, if I set the one failing AG group to readable secondary the job succeeds. So that means that 3/4 work fine, only this one is having an issue.
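One workaround worth sketching (not a supported fix, and the query below assumes the SQL 2012 AlwaysOn DMVs) is to make the failing job step check whether the local replica currently holds the PRIMARY role and only call dcexec when it does, so the upload is skipped quietly on the non-readable secondary:

IF EXISTS (SELECT 1
           FROM sys.dm_hadr_availability_replica_states
           WHERE is_local = 1
             AND role_desc = 'PRIMARY')
BEGIN
    PRINT 'Primary replica - run the dcexec upload step.'
    -- call dcexec here, or let the next job step run
END
ELSE
    PRINT 'Secondary replica - skipping collection set 3 upload.'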
Last weekend many of our servers had a failed job "collection_set_3_upload". The error that occurred is: "Violation of PRIMARY KEY constraint 'PK_active_sessions_and_requests'. Cannot insert duplicate key in object 'snapshots.active_sessions_and_requests'. The duplicate key value is (2824333, 2015-10-25 02:54:49.7630000 +02:00, 1)." Last weekend we happened to go from summer time to winter time, i.e. the clock passed 02:00 - 03:00 twice during that night.
In other words, there is a bug in the Data Collector component that collects data for the Management Data Warehouse: it uses local time instead of UTC. I've created a Connect item to report it to Microsoft. URL... So how do you get your process running again? The job will no longer run, because every 5 minutes it will keep trying to upload the conflicting data for the second 02:00 - 03:00 period. I've only found one solution: get rid of all data collected but not yet uploaded.
You do this by stopping the collection set (in SSMS go to Object Explorer -> <the server you want to fix> -> Management -> Data Collection -> System Data Collection Sets, right-click "Query Statistics" and select "Stop Data Collection Set"). Then you delete the cached results from the SQL Server machine's hard disk. These cached results are files located in a Temp folder on the SQL machine itself, inside the AppData folder for the service account SQL Server Agent is running under. Usually it will be something like: C:\Users\<sql agent service account>\AppData\Local\Temp.
Inside this folder, delete all files that have 'QueryActivity' in their name. You'll lose all data collected since the start of winter time, but at least your data collection process will work again. After this you can start the collection set again by right-clicking it and selecting "Start Data Collection Set". Every 5 minutes the data will be summarised and uploaded into your management data warehouse.
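The same stop/start can also be scripted rather than clicked through in SSMS; a sketch using the msdb collector procedures (the collection set name is assumed to be the standard 'Query Statistics'):

USE msdb
GO
DECLARE @id int
SELECT @id = collection_set_id
FROM dbo.syscollector_collection_sets
WHERE name = N'Query Statistics'

EXEC dbo.sp_syscollector_stop_collection_set  @collection_set_id = @id
-- delete the cached QueryActivity* files from the agent account's Temp folder here
EXEC dbo.sp_syscollector_start_collection_set @collection_set_id = @id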
Posting Data Etiquette - Jeff Moden Posting Performance Based Questions - Gail Shaw Hidden RBAR - Jeff Moden Cross Tabs and Pivots - Jeff Moden Catch-all queries - Gail Shaw
I have a SSIS package with a Data Flow task. This task transfers the data from SQL Server 2000 to a table in SQL Server 2005.
I deployed and tested this package on the Test Server. Then put this package in a job and executed it - Works fine.
On the production server, if I execute the package through DTEXECUI, it works fine. But when I try executing it through a job, the job fails and gives me the following error:
Description: The external metadata column collection is out of synchronization with the data source columns. The "external metadata column "T_FieldName" (82)" needs to be removed from the external metadata column collection....
What I don't understand is, why are there no errors displayed when I execute the package through DTEXECUI.
How do I retrieve the connections (connection managers) collection from a custom Data Flow destination? ComponentMetadata.RuntimeConnectionCollection is empty. I would like to be able to access all the connections defined in the package from the custom data flow task.
I came across code in which it was possible to access the Connections collection using IDtsConnectionService for a custom task (destination). The custom task has access to the serviceProvider, which can be used to get access to the IDtsConnectionService interface, but the custom data flow task does not.
I'm getting an error when trying to edit the Expressions of a Data Flow task. It seems to happen when I rename some of the Data Flow components, but not always. The error I get is:
Element "[ADO Net Source].[SqlCommand]" does not exist in the collection "Properties"
However, if you look at the XML, this property does exist. So I'm not sure why this should occur.
I'm using SSIS 2008 R2 with Visual Studio 2008 V 9.0.30729.4462 QFE.
<component id="1" name="ADO Net Source" componentClassID="{2E42D45B-F83C-400F-8D77-61DDE6A7DF29}" description="Extracts data from a relational database by using a .NET provider." localeId="-1" usesDispositions="true" validateExternalMetadata="True" version="4" pipelineVersion="0" contactInfo="Extracts data from a relational database by using a .NET provider.;
Is there a way to copy a table schema into a new table rather than having to generate a CREATE TABLE script? For example, I have table dbo.MyTable and I want to copy the schema to newuser.MyTable without a script. I know you can restore into a new database from an existing database, so maybe you can do something similar here?
thanks
-c
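One lightweight trick (a sketch; it copies column names, types and nullability, but not keys, indexes, defaults or triggers) is SELECT ... INTO with a predicate that is never true:

SELECT *
INTO newuser.MyTable          -- created on the fly with dbo.MyTable's column definitions
FROM dbo.MyTable
WHERE 1 = 0                   -- no rows are copied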
Can anyone point me to a white paper or discussion on the issues of gaining access to SQL 2000 data from a SQL 7 installation? I'm about to upgrade servers from 7 to 2000, but a few of the servers exchange data bi-directionally, and upgrading both servers at the same time would be problematic; thus the need, at some point, to get at the SQL 2000 data from the SQL 7 server.
Hi! I need to load text data into SQL 7. The tricky part (at least for me) is that this data may contain duplicates. How can I load this data while discarding the duplicate rows? AFAIK, the SQL job (or DTS) will fail if a primary key is violated. TIA, Fabio Aneas
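One pattern that avoids the PK failure (a sketch; the file path, delimiters and column names are assumptions) is to bulk load into a keyless staging table first and then move across only the rows that are not already in the target:

-- stage the text file with no constraints, so duplicates load without error
BULK INSERT dbo.Staging_Import
FROM 'C:\data\feed.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

INSERT INTO dbo.TargetTable (KeyCol, Col1)
SELECT DISTINCT s.KeyCol, s.Col1
FROM dbo.Staging_Import AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.KeyCol = s.KeyCol)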
Firstly, I am not a programmer, and this is the first time I have used this forum; I am working with some friends who have been assisting. I am developing a product that imports data into a SQL database; the data is recorded in real time but posted at one-minute intervals. The data is then exported to a local flat file, again at one-minute intervals, each row containing about 50 columns. Currently the whole file is being imported across the internet and it all works fine. I am using Analysis Services to look at the data every two minutes. As the file grows, this time becomes longer.
What I want to do is import only the latter portion of the flat file, say the last 144 rows (there are 1440 rows per day). What is the most efficient way to do this? Duplicated records would be a disaster. Ideally the number of rows to be collected would be variable, not fixed at 1440; these rows would be added to the main table, which will probably contain up to a year's worth of data, i.e. 1440 rows * 365 days with approx 50 columns. Any comments to point me in the right direction would be appreciated. Steve
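One option worth sketching (the file path, delimiters and the append-only assumption are all mine) is to remember how many rows have already been loaded and skip them with BULK INSERT's FIRSTROW option, so only the tail of the flat file is read each time:

-- assumes the flat file only ever grows by appending new minute rows
DECLARE @AlreadyLoaded int, @sql varchar(1000)
SELECT @AlreadyLoaded = COUNT(*) FROM dbo.MachineReadings   -- or keep a separate watermark table

SET @sql = 'BULK INSERT dbo.MachineReadings FROM ''C:\feed\minute_data.csv'' '
         + 'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = '
         + CONVERT(varchar(12), @AlreadyLoaded + 1) + ')'
EXEC (@sql)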
Hi, Using SSIS, I am successfully importing data from Excel files into a table in SQL Server 2005. These Excel files are ALWAYS open and are being updated from external sources, i.e. third-party tools. After doing a lot of investigation, I have reached the following conclusions: when the Excel files are NOT being updated, the SSIS package works, BUT when the Excel files are being updated, the SSIS package does not work. The error is:
[Excel Source [749]] Error: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
Please note that the Excel files MUST be open so that they get updated by the third-party tool. AND, every few minutes (i.e. 5 minutes), my SSIS package should import these Excel files.
Question: how is it possible to load the Excel data while it is being updated by the external third party?
There is an Excel file. This file is open all the time so that the Excel sheets get populated by an external third-party application in real time, so the data inside the spreadsheets is constantly changing. The spreadsheet can only be updated with data while it is open. I would like to import this Excel data into SQL Server 2005 at a certain interval. I tried using the Import Wizard, but it seems the import does not work if the source (the Excel file) is open. Is there an alternative, please? Thanks
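One workaround I would sketch (the paths, sheet name and provider string are assumptions, xp_cmdshell and ad hoc distributed queries have to be enabled) is to copy the open workbook to a scratch file on a schedule and import the copy, so the reader never touches the file the third-party tool has open:

-- take a snapshot of the live workbook
EXEC master..xp_cmdshell 'copy /Y "C:\Feeds\Live.xls" "C:\Feeds\Live_snapshot.xls"'

-- import the snapshot (Jet provider string for an .xls on SQL Server 2005)
INSERT INTO dbo.MachineData (Col1, Col2)
SELECT Col1, Col2
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Feeds\Live_snapshot.xls;HDR=YES',
                'SELECT * FROM [Sheet1$]')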
-- declaration added so the loop is runnable; the original post starts mid-script
DECLARE @x int
SET @x = 1
WHILE (@x < 75000)
BEGIN
    INSERT INTO myTestTable VALUES (@x)
    SELECT @x = @x + 1
END
2. While the script is still running, I want to know how many records are in the table. From the same query window as the script, I have run both of the following statements.
select count(*) from mytesttable with (nolock)
select count(*) from mytesttable with (tablock)
Instead of getting the answer immediately, they run only after the original script has completed. They seem to be "blocked". How can I get a near-realtime count of the number of records in this table while the script populates it?
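Two things worth trying (sketched under the assumption that the "blocking" is really just the statements queuing behind the running batch in the same query window): run the NOLOCK count from a second connection, or read the approximate row count from the metadata instead of scanning the table:

-- from a separate query window / connection:
SELECT COUNT(*) FROM dbo.myTestTable WITH (NOLOCK)

-- or the approximate metadata count (2005+; on SQL 2000 use sysindexes with indid < 2):
SELECT SUM(rows) AS approx_rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.myTestTable')
  AND index_id IN (0, 1)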
I have a requirement to mirror our production database (or part/derivative of it). It is imperative that the mirrored database maintains a live copy of the production data. The data is financial data, so analysis of it requires latest prices, exchange rates etc.
One product I have looked at, and am very impressed with so far is DataMirror (www.datamirror.com). I have no idea of its price yet, as I'm waiting for a rep to contact me. In the meantime, does anyone have experience in this field with alternative products? Is there a free product that I should be looking at?
For example, if I were to change my SMTP server within the rsreportserver.config file, does it automatically update in realtime the second I save it, or do I need to refresh something?
Any insight on this before I do anything would be appreciated.
I've got an Access front end containing various forms and subforms, and we have just transferred the data into SQL Server for storage. We can use the majority of the forms, but I now have a problem with updating the related data.
We have had problems updating the data: we need to close the form down in order to get amended data to register, and just moving onto the next record gives an ODBC error message.
Even using this method, some details refuse to update; although the changes are initially visible on the form, you cannot get them to transfer to the data file.
The error message we get is:
[microsoft][odbc sql server driver][sql server] the text, ntext, and image datatypes cannot be used in the where,having, or on clause, except with the like or is null predicates (#306)
We've checked the structures of the tables and the code in the form (it works in the old Access back end).
I have a backup of data from SQL Server 7.0, and now when I try to restore it into SQL Server 2005 Express, I get the following error......
-------------------- Msg 3154, Level 16, State 2, Line 1 The backup set holds a backup of a database other than the existing 'GOSLDW' database. Msg 3013, Level 16, State 1, Line 1 RESTORE DATABASE is terminating abnormally. --------------------
Here is the SQL I'm using to restore the database:
RESTORE DATABASE GOSLDW
FROM DISK = 'C:\sqlserver\Data\GOSLDW'
WITH MOVE 'GOSLDW' TO 'C:\sqlserver\Data\GOSLDW.mdf',
     MOVE 'GOSLDW' TO 'C:\sqlserver\Data\GOSLDW.ldf'
GO
Why am I getting this error? Am I missing anything here?
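Msg 3154 usually means a database named GOSLDW already exists on the instance but did not originate from this backup; if overwriting it is intended, WITH REPLACE is the usual addition, sketched below (the .bak extension and the log's logical name are assumptions, so confirm them with RESTORE FILELISTONLY first). Separately, be aware that SQL Server 2005 can only restore backups taken on SQL Server 2000 or later, so a genuine 7.0 backup will be rejected in any case.

RESTORE FILELISTONLY FROM DISK = 'C:\sqlserver\Data\GOSLDW.bak'   -- confirm the logical file names

RESTORE DATABASE GOSLDW
FROM DISK = 'C:\sqlserver\Data\GOSLDW.bak'
WITH REPLACE,
     MOVE 'GOSLDW'     TO 'C:\sqlserver\Data\GOSLDW.mdf',
     MOVE 'GOSLDW_log' TO 'C:\sqlserver\Data\GOSLDW.ldf'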
I'm importing floor machine data from SQL 7 into SQL 2005 using SSIS. I import the SQL 7 data into a SQL 2005 master table and then attempt to match the imported data with the data in a current table, for either an update or an insert of new machines. The SQL 2005 master table was originally imported from a SQL 2000 database. When I run the first Lookup, it does not recognize the PK matches between the SQL 7 import and the SQL 2005 master, and imports all the SQL 7 rows as new machines. The first Lookup branches to a second Lookup that checks for changes in the SQL 2005 master. When I run the package a second time, the second Lookup treats all the records as updates, but it should treat them as found. Any suggestions as to why this process is not working properly would be appreciated. Is there a way I can embed a picture of the process from SSIS in this post? Thanks
Hi all, from "How to Call a Parameterized Stored Procedure by Using ADO.NET and Visual Basic.NET" at http://support.microsft.com/kb/308049, I copied the following code into a project "pubsTestProc1.vb" in my VB 2005 Express Windows Application:
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.SqlDbType
Public Class Form1
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
Dim PubsConn As SqlConnection = New SqlConnection("Data Source=.\SQLEXPRESS;integrated security=sspi;" & "initial Catalog=pubs;")
Dim testCMD As SqlCommand = New SqlCommand("TestProcedure", PubsConn)
testCMD.CommandType = CommandType.StoredProcedure
Dim RetValue As SqlParameter = testCMD.Parameters.Add("RetValue", SqlDbType.Int)
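' NOTE: the copied sample is truncated here - the original KB code also declares the
' NumTitles output parameter used below, adds the author-id input parameter, opens
' PubsConn and loops over a SqlDataReader to print the titles, none of which was pasted in.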
Console.WriteLine("Number of Records: " & (NumTitles.Value))
End Sub
End Class
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
The original article uses the code statements shown in pink for the Console Application version in VB.NET. I do not know how to print out the output of ("Book Titles for this Author:"), ("{0}", myReader.GetString(2)), ("Return Value: " & (RetValue.Value)) and ("Number of Records: " & (NumTitles.Value)) in the Windows Application Form1 of my VB 2005 Express. Please help and advise.