I've been handed a system that inserts millions of records per day into a single table with a composite 'wide' index (they tell me it's the only way to achieve uniqueness), and they run large financial reports from this 300,000,000-row table. To say we are I/O bound is an understatement.
I want to split the data feed (insert) workload away from the reporting tables, but I need a method to transfer the feed data into the report tables and to control the volume of traffic. My first choice is replication, but is it sufficiently robust for a commodities system? If not, any ideas?
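If replication turns out to be too fragile, one alternative is a scheduled batch copy keyed on an increasing column, which also caps how many rows move per run. This is only a hedged sketch; the table and column names (FeedTable, ReportTable, FeedID) are invented for illustration.

    -- Hypothetical batched transfer: move at most 50,000 new feed rows per run.
    DECLARE @LastID int
    SELECT @LastID = ISNULL(MAX(FeedID), 0) FROM dbo.ReportTable

    INSERT INTO dbo.ReportTable (FeedID, TradeDate, Amount)
    SELECT TOP 50000 FeedID, TradeDate, Amount
    FROM dbo.FeedTable
    WHERE FeedID > @LastID
    ORDER BY FeedID

Run it from a job as often as the reporting side can absorb; the TOP value is the traffic throttle.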
I'm working through the DM tutorials and I can't select a data mining technique when creating a new mining structure in BI Studio - BI Studio locks up totally. I have installed SQL Server 2005 SP1 and the components from the feature pack. The Analysis Services service is running, and I have pointed the DM project properties at the running instance of Analysis Services. I have read the various posts on the forum and double-checked my configuration - all seems as it should be.
All suggestions appreciated - very much looking forward to working with the technology.
Is it possible/advisable, when transferring very large amounts of data from server to server, to first transfer the data to a new table and then alter the new table, adding indexes, defaults, etc. based on the original table?
If it is, what flow item would be used to transfer/alter the indexes and defaults?
I'm very new to SSIS, so the more detail you can give the better.
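For reference, here is a rough sketch of the load-then-index pattern in plain T-SQL; in SSIS the same idea is a data flow into a bare heap table followed by Execute SQL tasks that add the indexes and defaults. All object names below are made up for illustration.

    -- 1. Bulk load into a bare heap first (fastest insert path).
    SELECT * INTO dbo.Staging_Orders FROM SourceServer.SalesDb.dbo.Orders

    -- 2. Then add indexes/defaults copied from the original table definition.
    ALTER TABLE dbo.Staging_Orders ADD CONSTRAINT PK_Staging_Orders PRIMARY KEY (OrderID)
    CREATE NONCLUSTERED INDEX IX_Staging_Orders_CustomerID ON dbo.Staging_Orders (CustomerID)
    ALTER TABLE dbo.Staging_Orders ADD CONSTRAINT DF_Staging_Orders_Created DEFAULT (GETDATE()) FOR CreatedDate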
In the full recovery model, if I run a transaction that inserts 10 MB of data into a table, then 10 MB of data is written to the data file. Does this mean the log file will grow by exactly 10 MB as well?
I understand that all transactions are logged to the log file to enable rollback and point-in-time recovery, but what is actually physically stored in the log file for this transaction's record? Is it the text of the command from the transaction, or the actual physical data from that transaction?
I ask because, say I have two drives, one with 5 MB/s write speed for the log file and one with 10 MB/s write speed for the data file. If I start trying to insert 10 MB of data per second into the table, am I going to be limited to 5 MB/s by the log file drive, or is SQL Server not going to try to log all 10 MB each second to the log file?
My vendor requires data to be sent in Excel format. Some of my tables have more than 65,536 rows, so I need to use Excel 2007 (max of 1,048,576 rows). Right now my data sits in SQL 2000, and I am using MS SQL Enterprise Manager 8.0 to prepare the data. Is there some kind of add-on or option I am missing that would let DTS export from SQL to Excel 2007? Thanks in advance.
I have two databases (MYDB1, MYDB2) on two different servers (SERVER1, SERVER2). I want to create a stored procedure in MYDB1 on SERVER1 that gets some data from a table in MYDB2 on SERVER2. How can I do this?
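The usual way is a linked server plus four-part naming. A minimal sketch, assuming a SQL login on SERVER2 (the login, table, and procedure names are placeholders):

    -- On SERVER1: register SERVER2 as a linked server (run once).
    EXEC sp_addlinkedserver @server = N'SERVER2', @srvproduct = N'SQL Server'
    EXEC sp_addlinkedsrvlogin @rmtsrvname = N'SERVER2', @useself = N'false',
         @locallogin = NULL, @rmtuser = N'someLogin', @rmtpassword = N'somePassword'
    GO
    -- Then the procedure in MYDB1 can query MYDB2 with four-part names.
    CREATE PROCEDURE dbo.GetRemoteData
    AS
        SELECT *
        FROM SERVER2.MYDB2.dbo.SomeTable
    GO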
Hello members and contributors, Assalamualaikum. I am a university student and I am currently building my own website in ASP. I have a small problem with the personal message (private message) section. I want the message box field of my PM to render HTML tags, like the message box on this website does (it also has an option to see the HTML view). Also, in my SQL database the maximum length of my PM column is 4,000, and obviously the field will not store text that exceeds that limit. How can I increase the PM field length? I have added the option of replying to a message, and the old message is shown as a quoted message, so all of the text is very important. Also, what technique should I use to stop <a href> tags?
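On the length question only, a hedged sketch, assuming the PM column is currently nvarchar(4000) and you are on SQL Server 2005 or later; the table and column names are guesses:

    -- Widen the private-message column so long quoted replies are not truncated.
    ALTER TABLE dbo.PrivateMessages
    ALTER COLUMN MessageBody nvarchar(max) NOT NULL

Stripping or neutralising <a href> tags is usually easier in the ASP layer (for example by HTML-encoding the text before display) than in T-SQL.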
I'm a newbie when it comes to the SQL Server side of things; I've been doing Access on and off for years. I've got a form that is the main point of entry for all data. As a matter of technique, should I use one stored procedure to validate data (testing for uniqueness mostly) and then another one to actually write the data to the tables? Does it make a difference one way or another to performance? Do I gain any flexibility in my app doing it that way (Access Data Project)? Your help is appreciated. --Jake
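For comparison, many people fold the uniqueness check and the write into a single procedure so the check and the insert happen in one round trip. A minimal sketch with made-up names:

    CREATE PROCEDURE dbo.AddCustomer (@CustomerName nvarchar(100))
    AS
    SET NOCOUNT ON
    IF EXISTS (SELECT 1 FROM dbo.Customers WHERE CustomerName = @CustomerName)
        RETURN 2   -- already exists, nothing written
    INSERT INTO dbo.Customers (CustomerName) VALUES (@CustomerName)
    RETURN 1       -- inserted
    GO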
Hi, we currently use data binding to bind the results of a SQL query to a DataGrid. The problem is that we let the user choose which fields they want to see in the grid.
We currently include ALL of the optional fields in the query even if they are not used by the grid, which makes for a really big query with MANY inner and left outer joins.
What's the best technique for dynamically constructing my query with only the fields the user chooses?
Any suggestions or web page links will be appreciated.
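One common technique is to build the SELECT list dynamically from the user's choices, validate each name against the catalog so only real column names get in, and run the result through sp_executesql. A hedged sketch with hypothetical table and column names:

    -- @FieldList is the comma-separated list the user picked, already validated
    -- against INFORMATION_SCHEMA.COLUMNS so arbitrary text cannot be injected.
    DECLARE @FieldList nvarchar(1000), @Sql nvarchar(4000)
    SET @FieldList = N'[OrderID], [OrderDate], [CustomerName]'
    SET @Sql = N'SELECT ' + @FieldList + N' FROM dbo.Orders o '
             + N'LEFT OUTER JOIN dbo.Customers c ON c.CustomerID = o.CustomerID'
    EXEC sp_executesql @Sql

The other half of the win is only joining the optional tables whose columns were actually requested; that is what removes the unused LEFT OUTER JOINs from the big query.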
I have an Excel sheet that contains the same columns as a table in a SQL database. I want to transfer the data from the sheet to the table from the business-logic code layer, not from Enterprise Manager via the wizard. What can I do? ...please, it's urgent.
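One way to do it entirely from code is to have the business-logic layer execute a single T-SQL statement that reads the workbook with OPENROWSET. This is a hedged sketch: the file path, sheet name, column names, and target table are placeholders, and ad hoc OPENROWSET queries with the Jet provider must be permitted on the server.

    -- Read Sheet1 of the workbook and append it to the matching SQL table.
    INSERT INTO dbo.MyTable (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                    'Excel 8.0;Database=C:\Data\MySheet.xls;HDR=YES',
                    'SELECT Col1, Col2, Col3 FROM [Sheet1$]')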
Question: I am using SQL Server 2000. I have a table named Cities which has more than 2,600,000 records. I have to display the records for a specific city page by page, and I don't want to compromise on performance. Does anyone have an idea? Waiting for your fruitful response. Happy day and night to all. Muhammad Zeeshanuddin Khan
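A sketch of one technique that works on SQL Server 2000 (which has no ROW_NUMBER): keyset paging, where each page remembers the last key it displayed. The column names here are assumptions about the Cities table.

    -- Page through one city 50 rows at a time, ordered by the primary key.
    CREATE PROCEDURE dbo.GetCityPage (@CityName varchar(100), @LastID int)
    AS
    SET NOCOUNT ON
    SELECT TOP 50 CityRecordID, CityName, Population
    FROM dbo.Cities
    WHERE CityName = @CityName AND CityRecordID > @LastID
    ORDER BY CityRecordID
    GO

The caller passes 0 for the first page and the largest CityRecordID it received for each subsequent page, so every page is a cheap index seek.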
I need to write a query in an SP that returns either yes or no. Is it bad technique to use the return value of the SP for this? Should I use a scalar or an output parameter instead? If so, which?
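For what it's worth, the return value is conventionally reserved for status codes, so a bit OUTPUT parameter reads more naturally for a yes/no answer. A minimal sketch with invented names:

    CREATE PROCEDURE dbo.CheckSomething (@SomeKey int, @Answer bit OUTPUT)
    AS
    SET NOCOUNT ON
    IF EXISTS (SELECT 1 FROM dbo.SomeTable WHERE SomeKey = @SomeKey)
        SET @Answer = 1   -- yes
    ELSE
        SET @Answer = 0   -- no
    GO

    -- Caller:
    DECLARE @Result bit
    EXEC dbo.CheckSomething @SomeKey = 42, @Answer = @Result OUTPUT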
Hi all, I'm creating some SPs and I've got a query which is inserting data into a table with a unique constraint:

    CREATE TABLE [fil_Films] (
        [fil_ID] [int] IDENTITY (1, 1) NOT NULL ,
        [fil_Film] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
        CONSTRAINT [PK_Tfil_Film] PRIMARY KEY NONCLUSTERED ([fil_ID]) ON [PRIMARY] ,
        CONSTRAINT [IX_fil_Films] UNIQUE NONCLUSTERED ([fil_Film]) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

When I insert data, should I check in the SP to see if there is an existing record, or simply catch the error if it already exists? Which is the better technique? My current SP looks like:

    CREATE PROCEDURE spAddFilm (@Type varchar(50)) AS
    DECLARE @Count int
    SET NOCOUNT ON
    SELECT @Count = Count(fil_ID) FROM fil_Films WHERE fil_Films.fil_Film = @Type
    IF @COUNT IS NULL
    BEGIN
        INSERT INTO dbo.Tfil_Film (Tfil_Type) VALUES (@Type)
        RETURN 1 -- OK
    END
    ELSE
    BEGIN
        RETURN 2 -- Exists
    END
    GO

Thanks
Sam
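Not a verdict on which approach is better, but for reference the check-first variant is usually written with IF EXISTS against the posted table (note that COUNT() returns 0, not NULL, when nothing matches, so the IF test above never fires). A sketch using the names from the posted CREATE TABLE:

    CREATE PROCEDURE spAddFilm (@Type nvarchar(50))
    AS
    SET NOCOUNT ON
    IF EXISTS (SELECT 1 FROM dbo.fil_Films WHERE fil_Film = @Type)
        RETURN 2  -- exists
    INSERT INTO dbo.fil_Films (fil_Film) VALUES (@Type)
    RETURN 1      -- OK
    GO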
I have created an SSIS package that transfers data from a FoxPro database to an instance of SQL Server 2005 Express. I used the wizard to create the package, but I load and execute the package within a custom application that I have written in C#.
The way the custom application is intended to work is that the user can have the database in any location on the computer; all he has to do is specify the location, and the application programmatically changes the location of the source on the loaded package and then executes it. When I initially run the package (using the original path), it works fine and transfers the data. However, every subsequent time I run the application and specify a different path, the database on the SQL Server side gets created as expected but the data is not transferred!
Where am I going wrong? Do I need to save the package after I modify the source, then reload and run it again, or do I need to change something else in the Data Flow to make this work?
I am a newbie to SQL MDS (Master Data Services) and want to know how to transfer tables from two different SQL databases into MDS. Please suggest the steps to follow, with examples if possible.
Looking for some insight from the professionals about how they handle row inserts, specifically single-row inserts through a stored procedure versus bulk inserts.

One argument comes from people who say all inserts (and updates and deletes, I guess) should go through stored procedures. The reasoning is that the developers who code the client side have no reason to understand HOW the data is stored, just that it is. Another problem is an insert that deals with multiple tables; it would be very easy for the developer to forget a step. That last point also applies to business logic. In my case, adding a security to our SecurityMaster can touch 1 to 4 tables depending on the type of security. Also, certain fields are required while others are set to null depending on the type. Because a stored procedure cannot be passed datasets, only scalar values, when you need to deal with multiple (i.e. bulk) rows you are stuck using cursors. This post is NOT about the pros and cons of cursors; there are plenty of those threads on the boards (some of them probably started by me, showing my understanding, or more correctly lack of it, of the way to do things). Stored procedures also give you the ability to abort and/or log inserts that cannot happen because of constraint and/or business rule failures.

Another approach is to write code (not accessible from outside the database) that handles bulk inserts. You would need to write in rules to "extract" or "exclude" rows that do not match constraints or business rules, otherwise ALL the inserts would fail because of one bad row. I guess you could put the "potential" rows into a temp table, apply your rules to the temp table, and delete/move rows that would fail. Any rows left can then be bulk inserted. (You could also use the rows that were moved to another temp table for logging why they failed; a sketch of this is below.)

So that leaves us with two possible ways to get data into the system: a single-row-based approach for client apps and a bulk-based one for internal use. But that leaves us with another problem: you now have business logic in TWO separate areas, and you have to remember to modify code or fix bugs in multiple locations.

For those that are still reading my post, my question is: how do you handle this? What is the approach you take?
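To make the staging-table idea above concrete, here is a rough sketch; the table and column names are made up, and the two WHERE clauses stand in for whatever real constraints and business rules apply:

    -- Bulk-load everything into a temp table, peel off the rows that would fail,
    -- then insert whatever is left in one set-based statement.
    SELECT * INTO #Incoming FROM dbo.StagingSecurities

    -- Move rule violations aside (kept for logging why they failed).
    SELECT * INTO #Rejected FROM #Incoming WHERE SecurityType IS NULL OR Cusip IS NULL
    DELETE FROM #Incoming WHERE SecurityType IS NULL OR Cusip IS NULL

    -- Remaining rows can be inserted in one shot.
    INSERT INTO dbo.SecurityMaster (Cusip, SecurityType, Description)
    SELECT Cusip, SecurityType, Description FROM #Incoming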
I've been asked to turn our single-user database system into a multi-user system. At present, each user has a copy of MSDE on their desktop machine and uses our program to access it. In future, we would like to centralise our MSDE instance and allow multiple users to access it. In order to facilitate this, we are going to allow only one user write access to the system at a time (I know, it's a kludge, but the system was never designed for multiple users in the first place).

I have a single, simple question this being the case: can I update a single "read-only" bit field in a table of the database in order to flag to other users that the system is in read-only mode, in a way that avoids concurrency issues? i.e. does an UPDATE query lock and unlock? (I suspect the answer is yes!) If anyone else has experience of these things, I would also appreciate some tips on how best to proceed.
Thanks
Robin
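On the single question: an individual UPDATE statement is atomic, and you can turn the flag into a test-and-set by filtering on the old value and checking @@ROWCOUNT. A hedged sketch with an invented flag table:

    -- Try to take the write lock; exactly one caller will see @@ROWCOUNT = 1.
    UPDATE dbo.SystemFlags
    SET ReadOnlyMode = 1
    WHERE ReadOnlyMode = 0

    IF @@ROWCOUNT = 1
        PRINT 'This user now has write access; everyone else stays read-only.'
    ELSE
        PRINT 'Someone else already holds write access.'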
Hi folks, I have implemented this technique to simplify SCD loads and also to maintain consistent units of work during update/insert of a single row. Wanted to get your feedback on this technique: performance, transaction issues, etc.
I send all rows to an OLE DB Command that performs both update and insert for a single row in a single command:
Code snippet:

    UPDATE PROPERTY
    SET ORD_TERM_DT = ?
    WHERE ACCOUNT_NBR = ? AND ORD_TERM_DT = '9999-12-31 23:59:59';
This way I can guarantee that if the termination (update) of an old row (say, row 10) succeeds but the insert of the new row 10 fails, it will roll back. Otherwise, row 10 would get terminated without being replaced by a current record...
Performance: a load of 7,734 changed records into a table of 6.8M existing records took roughly 8 seconds. The data flow task container has TransactionOption = Required.
I have an SSIS project where I am transferring data from a DB2 table to a SQL Server table. There is a column called REC_ID which I need to encrypt before we store it in SQL Server. SQL Server has built-in encryption functionality, and we need to use that because there are views that will decrypt this column and give the data to authenticated users.
So the question is: is there any way I can encrypt the column data in my SSIS package, using my target SQL Server database key and the SQL Server EncryptByKey function, while transferring?
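One pattern that fits here (a hedged sketch, not the only option): land REC_ID in a clear-text staging column during the data flow, then run an Execute SQL task against the target so EncryptByKey executes on the server where the key lives. The key, certificate, table, and column names below are placeholders, and the staging column is assumed to be a (n)varchar.

    OPEN SYMMETRIC KEY RecIdKey DECRYPTION BY CERTIFICATE RecIdCert;

    UPDATE dbo.TargetTable
    SET REC_ID_Encrypted = EncryptByKey(Key_GUID('RecIdKey'), REC_ID_Staging),
        REC_ID_Staging = NULL;

    CLOSE SYMMETRIC KEY RecIdKey;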
I'm looking to remove hundreds of legacy procs, triggers, functions, etc. in a DB. Is there a DMV, sys.<something> view, or technique that will tell me the last time an object was accessed or executed?
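There is no complete "last accessed" record on SQL Server 2005, but the plan cache gives a partial answer: objects whose plans are still cached expose a last execution time. A sketch (assumes SQL Server 2005 SP2 or later for the two-argument OBJECT_NAME, and only covers what is currently in cache):

    -- Most recent execution time per object, for whatever is still in the plan cache.
    SELECT OBJECT_NAME(st.objectid, st.dbid) AS object_name,
           MAX(qs.last_execution_time)       AS last_executed
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.dbid = DB_ID()
    GROUP BY OBJECT_NAME(st.objectid, st.dbid)
    ORDER BY last_executed DESC

Anything evicted from cache (or never run since the last restart) will not appear, so treat absence as "unknown", not "unused".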
First of all, I would like to say that this is my first time posting here; however, I'm pretty sure I'm in the best place to ask what I want. To cut the story short, I'm querying a SQL database on a remote machine and having the result saved (mapped) to another table in another database on the same remote machine. The destination table was empty before the query was run the first time. I have been searching for a smart way so that when I modify the source tables my query is based on, nothing is affected except the modified rows. In other words: if the row is already there and unchanged, do nothing; if it exists but was modified, update the existing record; otherwise it's a new record and it gets inserted. I think what I need will involve some coding, yes? I don't know if I'm clear about the requirement or not, but I know that you are experts and can direct me. Waiting for your valuable replies.
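What you are describing sounds like an upsert. A minimal sketch of the usual update-then-insert pattern (all names are placeholders; on SQL Server 2008 or later a single MERGE statement can do the same job):

    -- Update rows that already exist in the destination...
    UPDATE d
    SET    d.SomeValue = s.SomeValue
    FROM   DestDb.dbo.DestTable AS d
    JOIN   SourceDb.dbo.SourceTable AS s ON s.KeyCol = d.KeyCol
    WHERE  d.SomeValue <> s.SomeValue      -- only touch rows that actually changed

    -- ...then insert the rows that are missing altogether.
    INSERT INTO DestDb.dbo.DestTable (KeyCol, SomeValue)
    SELECT s.KeyCol, s.SomeValue
    FROM   SourceDb.dbo.SourceTable AS s
    WHERE  NOT EXISTS (SELECT 1 FROM DestDb.dbo.DestTable AS d WHERE d.KeyCol = s.KeyCol)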
We have an SSIS package that serves an ETL role, but sometimes the package crashes and we want our support personnel to be aware of it, so we implemented an event handler: the executable is the package, and the event handler is on OnTaskFailed. The idea is that if any of the tasks fail, the event handler is triggered.
Recently, while debugging the package, I found that one of the DFTs failed and control went to the event handler, but then the rest of the tasks kept executing and the package finished successfully.
How do I ensure that if any task fails, control invokes the task in the event handler and then ends the package run?
One more question: what is the ideal way to handle events for ETL jobs?
Hi,
Env: MS SQL Server 2000
DB info (sorry, no DDL nor sample data):
tblA has 147,249 rows - clustered index on the pk (one key of datatype int); it has two columns, both used in joins.
intersecTbl4AB has 207,016 rows - clustered index on the two fks; this intersection table has six columns but only the two fks are used in joins.
tblB has 117,597 rows - clustered index on the pk (one key of datatype bigint); it has four columns but only its key is used in joins.
A complex query involving the above three tables includes inner and outer joins, aggregates, sorting, predicates, a math function, a derived table, etc. On the first run, the query takes about 4200 ms to finish; after some research on index optimization I provided some index hints, and the query now runs at about 3000 ms. (That was yesterday.)
Just now I realized that MS SQL Server 2000 is quite "intelligent": I think it saves search terms into cache, because the second search for the same term is much, much faster. Now, a couple of questions:
a) If I construct a long list of "common" terms and programmatically get SQL Server to cache them, would that speed up the overall query performance in my case? (Or does it depend on the quality of the "common" terms?) And if your answer is yes (supposing you've been there), do you know where I could find such a "common" term list, for everyday life or the general public?
b) What other techniques are out there to speed up the query described above? Bringing it down to 1000 ms would be most desirable.
Thanks.
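On (a): what gets reused is mostly the compiled plan and cached data pages, not the search term itself, so rather than pre-warming a list of "common" terms it usually pays more to run the query through sp_executesql with parameters, letting one cached plan serve every term. A generic sketch only; the column names are invented since no DDL was posted:

    DECLARE @Sql nvarchar(4000)
    SET @Sql = N'SELECT a.PkCol, SUM(b.Amount) AS Total
                 FROM tblA AS a
                 JOIN intersecTbl4AB AS x ON x.AKey = a.PkCol
                 JOIN tblB AS b ON b.PkCol = x.BKey
                 WHERE a.SearchCol = @Term
                 GROUP BY a.PkCol'
    EXEC sp_executesql @Sql, N'@Term nvarchar(100)', @Term = N'some search term'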
Is there any service or technique to reflect database changes to a form? I mean, if two people are updating the same data and one of them performs the update, I need to find a service in SQL Server or .NET (or a trigger) that can automatically reflect the change to the form control that displays the data before the other person makes a second update, so that they can see the update made by the first person before making their own update.
Hi everyone, I need some help transferring my database (SQL 2005) to the host's server. In SQL 2000 I had the management tools > DTS or the Import/Export tools. SQL 2005 lacks this; all I've got is Express Manager, which can't connect to multiple databases - it can connect to one database and has a screen to execute T-SQL commands. I don't want to have to recreate the whole database on the server... this would be crazy: 150 stored procedures and 30 tables, not to mention all the individual settings for each column. I know there is SQL Server Management Studio for the full version of SQL 2005. Anyone got any ideas? Any input/suggestions are greatly appreciated. Thanks, JShep
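If the host lets you upload a .bak file, backup/restore moves everything (tables, procedures, column settings) in one shot. A hedged sketch with placeholder paths and logical file names:

    -- On your machine:
    BACKUP DATABASE MyDatabase TO DISK = N'C:\Backups\MyDatabase.bak'

    -- On the host (logical names must match those reported by RESTORE FILELISTONLY):
    RESTORE DATABASE MyDatabase
    FROM DISK = N'D:\Uploads\MyDatabase.bak'
    WITH MOVE 'MyDatabase'     TO N'D:\SQLData\MyDatabase.mdf',
         MOVE 'MyDatabase_log' TO N'D:\SQLData\MyDatabase_log.ldf'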
Can anybody tell me how to transfer data from an Oracle database to a SQL Server database? What methods can I use? Is it necessary to write scripts? If scripts are required, can anybody give me a sample script?
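One scripted route is a linked server to Oracle followed by plain INSERT ... SELECT. This is a hedged sketch: the provider, TNS alias, credentials, and table/column names are all placeholders, and the Oracle client software must be installed on the SQL Server box.

    -- Register the Oracle instance as a linked server.
    EXEC sp_addlinkedserver @server = N'ORA_SRC', @srvproduct = N'Oracle',
         @provider = N'MSDAORA', @datasrc = N'MyTnsAlias'
    EXEC sp_addlinkedsrvlogin @rmtsrvname = N'ORA_SRC', @useself = N'false',
         @locallogin = NULL, @rmtuser = N'scott', @rmtpassword = N'tiger'
    GO
    -- Pull the rows across.
    INSERT INTO dbo.Customers (CustomerID, CustomerName)
    SELECT CUSTOMER_ID, CUSTOMER_NAME
    FROM OPENQUERY(ORA_SRC, 'SELECT CUSTOMER_ID, CUSTOMER_NAME FROM SCOTT.CUSTOMERS')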
SQL 7.0 - DTS. Trying to transfer all object info, SPs, constraints, and data into a fresh new database in the same instance, but it's hit and miss when it comes to getting everything transferred. I followed step-by-step instructions (GUI/scripts), and sometimes the table structure will transfer without PKs, as will some of the data and, if you're lucky, some constraints. Stored procs have a tendency to show up 20 minutes later. I'm testing this on a machine with IE 5, SP2, and freshly installed DLLs (just in case something was corrupted). Is there a priority as far as what to transfer first? For example, tables only first, then constraints, on down to records parent to child? I assume you should be able to transfer everything at the same time since it's built into the utility. Or does it come down to something in the schema? If anyone out there has experienced something similar and found a solution, please advise.
Hi, I have two servers, one running SQL Server 6.5 and the other running 7.0. I would like to export one database from 6.5 to 7.0. When I try to transfer the data, it says I have to upgrade the 6.5 server. Please tell me how to transfer data from 6.5 to 7.0 without using bcp.
Hi, here we have 5 servers: an application server, a data warehousing server, a web server, a QA server, and a production server. I am supposed to transfer (using DTS) all of the data (GDWReports) from the production server to the QA server. I exported the data from the production server to a destination using the Export wizard, and then tried to import that data into the QA server using a VB script, but I got an error along the lines of "login failed". How should I import the data? Please give suggestions ASAP. Thanks, Janreddy