I have master tables that I will be updating from our ERP system. Some examples I have seen take the approach of dropping a table in SQL Server and then recreating it before importing; some, and probably my choice, append and update; I have not seen an example where all records are deleted and the data appended afterwards. Of the three approaches, which is generally regarded as best practice / most efficient?
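To make the second option concrete, here is a rough sketch of the append-and-update pattern I have in mind (table and column names are made up):

    -- update rows that already exist in the master table
    UPDATE m
    SET m.Description = s.Description,
        m.UnitPrice = s.UnitPrice
    FROM MasterParts m
    JOIN ErpStaging s ON s.PartNo = m.PartNo;

    -- append rows that are new
    INSERT INTO MasterParts (PartNo, Description, UnitPrice)
    SELECT s.PartNo, s.Description, s.UnitPrice
    FROM ErpStaging s
    WHERE NOT EXISTS (SELECT 1 FROM MasterParts m WHERE m.PartNo = s.PartNo);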
Hi, hoping I can get a few views on a question I have relating to the above.
I am new to stored procedures and triggers and I am trying to understand 'best practice' a little better. Here is my question: if I have a table that stores information, and any field in that table is updated (and changes), I would like to inactivate the row as it was prior to the change and then add the change as a new, active row. This way I can see what the value was before (and that it's inactive), and what the active value is.
Hope this makes sense; if this is the wrong way to manage change history, any suggestions would be appreciated.
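To make it concrete, here is a rough sketch of the kind of trigger I am imagining (table, key and column names are made up, and I am not sure an INSTEAD OF trigger is even the right tool):

    -- MyTable has an identity key (RowID), a business key (ItemNo) and an IsActive flag
    CREATE TRIGGER trg_MyTable_History ON MyTable
    INSTEAD OF UPDATE
    AS
    BEGIN
        -- flag the rows being changed as inactive (they still hold the old values,
        -- because the INSTEAD OF trigger suppressed the original update)
        UPDATE t
        SET t.IsActive = 0
        FROM MyTable t
        JOIN inserted i ON i.RowID = t.RowID;

        -- re-insert the changed values as new, active rows
        INSERT INTO MyTable (ItemNo, SomeValue, IsActive)
        SELECT i.ItemNo, i.SomeValue, 1
        FROM inserted i;
    END

The idea is that the old row stays behind as the inactive history record and the new values become a fresh active row.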
A second question I have is as follows: if I have a table that stores a number, what would be the best way to create new records in a different table based on that number, where the number stored in table 1 represents how many times the record is to be created in the second table?
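For that second question, I was wondering whether a helper numbers table would do it; something like this (names are made up, and Numbers is assumed to hold 1, 2, 3, ... up to some maximum):

    -- Table1 has an ID and a Copies column saying how many rows to create
    INSERT INTO Table2 (SourceID)
    SELECT t.ID
    FROM Table1 t
    JOIN Numbers n ON n.Number <= t.Copies;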
Thanks. If anyone needs more data, please feel free to ask, I will help as best as I can and appreciate any advice & comments that you can give.
I'm new to DB mirroring, and I am trying to get it going in a test environment between two SQL 2005 Developer Edition servers. I have followed the documentation I have found but cannot get past a 1418 error when initially establishing a connection between the servers. Does anyone know of any good step-by-step guides that I could look at, in case I have missed something stupid?
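For reference, this is roughly what I ran on each server; the endpoint name, port, database name and server name here are made up, so treat it as a sketch of my setup rather than the exact scripts:

    CREATE ENDPOINT Mirroring
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (ROLE = PARTNER);

    -- then, on each partner (this is the statement that fails with error 1418)
    ALTER DATABASE MyTestDB SET PARTNER = 'TCP://otherserver:5022';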
Hi, I have the following two tables in MS SQL 2000:

1. Products: ProdID int, ProName char(10)
2. Orders: OrderID int, ProdID int, OrderDate datetime, Quantity int

I want to join these two tables to form the following result format:

                Prod1  Prod2  Prod3  Prod4
    21/01/07      1      0      1      0
    22/01/07      5      1      2      3
    23/01/07      8      0      1      2
    24/01/07      3      3      4      3
    25/01/07      2      0      1      4
    26/01/07      1      2      6      2

So the first row has the product names, the left-hand column has the order date, and each cell is the sum of all the orders for that product on that date. Any pointers on how to achieve this would be great.
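Since this is SQL 2000 there is no PIVOT operator, so I was wondering whether a CASE-based cross-tab like this is the right direction (it hard-codes the product names, which I would rather avoid):

    SELECT o.OrderDate,
           SUM(CASE WHEN p.ProName = 'Prod1' THEN o.Quantity ELSE 0 END) AS Prod1,
           SUM(CASE WHEN p.ProName = 'Prod2' THEN o.Quantity ELSE 0 END) AS Prod2,
           SUM(CASE WHEN p.ProName = 'Prod3' THEN o.Quantity ELSE 0 END) AS Prod3,
           SUM(CASE WHEN p.ProName = 'Prod4' THEN o.Quantity ELSE 0 END) AS Prod4
    FROM Orders o
    JOIN Products p ON p.ProdID = o.ProdID
    GROUP BY o.OrderDate
    ORDER BY o.OrderDate;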
I have an application that automatically reads a lot of data from a third-party application into my database, via XML. For example, I might read a couple thousand rows-worth of XML data, one row at a time in a foreach loop. To reduce the load on their server and database, I thought about putting a 2 second delay in between each of my automatic requests. Would this really help much, or is there enough overhead (setting up/tearing down connections, etc) with each request that it wouldn't reduce server load much anyway? Is 2 seconds enough? Too little or too much?
I have created my SQL .mdf file and now I am trying to create a SQL script for it - a script to create the database and tables. How do I do that? In Object Explorer I right-clicked on my database name,
selected Tasks and then Generate Scripts, and after that I clicked the Next button.
I am really interested to know how a data mining project is implemented, what skill set is required to do it, and how the algorithms are applied to real-world problems. I know how to use the SQL Server 2005 BI suite. Please give me some basic idea of how to start.
We are trying to use SSIS to import data from a flat file. The file contains a mixture of 2 different sets of data - first there are an unknown number of rows that define the column names, and then there are rows of the data itself that are delimited and that contain the same number of columns as are defined above.
Anyway, the only way we can see to import this data is first via a Script Source task, since a flat file source requires the number of rows to skip to be specified, which is unknown at design time. If anyone knows any way of getting round this I'd appreciate your advice.
So in the Script Source task we are looping through the rows in the flat file and writing to 2 output buffers - one for the metadata and one for the data itself.
The problem is the Script Source task only lets you specify output columns at design time, and we don't know what columns to output until run time as the number of delimited columns is unknown.
So does anyone know of a way of either:
- Adding output columns to a Script Source task at run time, OR
- Alternatively, we could output the delimited data from the Script Source as one long column. If so, what would be the best way of then splitting this up into individual columns?
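If we go the one-long-column route and land it in a staging table, would a numbers-table split along these lines be a reasonable way to break it apart? (The table names and the pipe delimiter are made up.)

    -- Staging holds one delimited line per row: RowID int, RawLine varchar(8000)
    -- Numbers holds 1, 2, 3, ... up to at least the longest line
    SELECT s.RowID,
           ROW_NUMBER() OVER (PARTITION BY s.RowID ORDER BY n.Number) AS ColumnOrdinal,
           SUBSTRING(s.RawLine + '|', n.Number,
                     CHARINDEX('|', s.RawLine + '|', n.Number) - n.Number) AS ColumnValue
    FROM Staging s
    JOIN Numbers n
      ON n.Number <= LEN(s.RawLine) + 1
     AND SUBSTRING('|' + s.RawLine, n.Number, 1) = '|';

Each row of the result would be one cell, with ROW_NUMBER giving the column position within the line.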
Hi! I have 6-7 tables in total containing sub-objects for different objects, like phone numbers and emails for contacts. This means I have to do several queries on each detail page. I will use stored procedures for fetching the sub-objects. My question is: if I merged the sub-objects into the same tables, letting me use perhaps 2 queries instead of 4 but perhaps doubling the size of the tables, would this have a chance of making any performance difference whatsoever? As I see it, the pros are fewer queries, and the cons are larger tables plus needing another field separating the types of objects in the table. Anyone have insight into this?
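For what it's worth, the merged table I am picturing would look something like this (names are made up):

    CREATE TABLE ContactDetails (
        DetailID    int IDENTITY(1,1) PRIMARY KEY,
        ContactID   int NOT NULL,
        DetailType  char(1) NOT NULL,    -- 'P' = phone number, 'E' = email, etc.
        DetailValue varchar(255) NOT NULL
    );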
I want to log all changes made to a table (only updates, since there will be no deletes or inserts).
I would like to see the user who changed it, the date and time, the field name, the old value, and the new value. If more fields are changed during the update, then add more records to the logging table.
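So far the pattern I have per audited column looks like this (table, key and column names are made up; the idea is one INSERT...SELECT block per column to be logged):

    CREATE TRIGGER trg_MyTable_Log ON MyTable
    AFTER UPDATE
    AS
    INSERT INTO ChangeLog (ChangedBy, ChangedAt, FieldName, OldValue, NewValue)
    SELECT SUSER_SNAME(), GETDATE(), 'SomeColumn',
           CONVERT(varchar(255), d.SomeColumn),
           CONVERT(varchar(255), i.SomeColumn)
    FROM inserted i
    JOIN deleted d ON d.KeyID = i.KeyID
    WHERE ISNULL(d.SomeColumn, '') <> ISNULL(i.SomeColumn, '');
    -- repeat the INSERT...SELECT for each column to be logged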
I have some data that is updated every day, but I don't know when. I'm trying to build a solution that runs a SQL query to check whether this data has been updated. If it has, I'll send the updated data via FTP as a text file.
How would you solve this?
My idea is to have 2 SSIS packages:
- Package1: runs at the same time every day (inserts any missing updates into a table).
- Package2: runs every hour to check the missing-updates table, and runs Package1 if any update for missing data is found.
My only worry is that if Package1 is running and at the same time Package2 decides to run Package1, I could get into trouble if I'm using temp tables with the same name for the text file updates, etc. Thank you.
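Would an application lock at the start of Package1 be a sensible guard against that? Something like this, with a made-up resource name:

    DECLARE @rc int;
    EXEC @rc = sp_getapplock @Resource = 'Package1Running',
                             @LockMode = 'Exclusive',
                             @LockOwner = 'Session',
                             @LockTimeout = 0;
    IF @rc < 0
        RAISERROR('Package1 is already running; exiting.', 16, 1);
    -- ...do the work, then release the lock:
    EXEC sp_releaseapplock @Resource = 'Package1Running', @LockOwner = 'Session';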
I need to load a lot of Excel, CSV, etc. files. These files have hundreds of columns and I need to validate the data. Some checks are simple range-type checking; some are more complex checks involving multiple columns.
There may be several hundred such rules, and I may need to let the program automatically correct some invalid data in the future.
Where should I implement this in SSIS? Or should I just load the files without any checking (all columns as text) and do the checking in T-SQL?
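If the T-SQL route is sensible, the sort of checks I am picturing against a load-everything-as-text staging table would be like this (column names and rules are invented for illustration):

    -- simple type check: flag rows where Age is not numeric
    SELECT RowID FROM Staging WHERE ISNUMERIC(Age) = 0;

    -- cross-column check: end date must not precede start date
    -- (assumes both columns have already passed an ISDATE() check)
    SELECT RowID
    FROM Staging
    WHERE CONVERT(datetime, EndDate) < CONVERT(datetime, StartDate);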
I am about to upgrade my main database server (5 db's - largest 16Gb) from NT 4 SP 6a / SQL Server 7 SP3 to Windows 2000 SP 2 / SQL Server 2000 SP 2
I am planning to detach the db's, back up to tape a few times and then totally trash the server, rebuilding it with the new software, restoring the db's from tape and then reattaching them.
Any reason I should not use this method, and can folk advise on the best-practice way of achieving this?
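For reference, the detach/attach steps I am planning look roughly like this (database name and file paths are made up):

    -- before the rebuild
    EXEC sp_detach_db 'MyDB';

    -- after the rebuild, once the files are restored from tape
    EXEC sp_attach_db @dbname = 'MyDB',
        @filename1 = 'D:\Data\MyDB.mdf',
        @filename2 = 'D:\Data\MyDB_log.ldf';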
So I've got 2 classes, one I wrote to interrogate databases using normal ADO.

Mine:

    SqlConnection myConnection = new SqlConnection(m_sConnectionString);
    SqlCommand myCommand = new SqlCommand(sQuery, myConnection);
    myCommand.CommandTimeout = 120; // 120 second timeout
    myConnection.Open();
    SqlDataReader result = myCommand.ExecuteReader(CommandBehavior.CloseConnection);
    return result;

Microsoft way:

    SqlDatabase dbSvc = new SqlDatabase(m_sConnectionString);
    DbCommand dbCommand = dbSvc.GetSqlStringCommand(sQuery);
    return ((SqlDataReader)dbSvc.ExecuteReader(dbCommand));

What's faster?

My way:

    SqlConnection myConnection = new SqlConnection(m_sConnectionString);
    SqlCommand myCommand = new SqlCommand(sQuery, myConnection);
    myCommand.CommandTimeout = 120; // 120 second timeout
    // Use a DataTable - required for default paging
    SqlDataAdapter myAdapter = new SqlDataAdapter(myCommand);
    DataTable myTable = new DataTable();
    myAdapter.Fill(myTable);
    myConnection.Close();
    myConnection.Dispose();
    myConnection = null;
    return (myTable);

Microsoft way:

    SqlDatabase dbSvc = new SqlDatabase(m_sConnectionString);
    DbCommand dbCommand = dbSvc.GetSqlStringCommand(sQuery);
    DataTable dtData = null;
    DataSet dsData = dbSvc.ExecuteDataSet(dbCommand);
    dtData = dsData.Tables[0];
    return (dtData);

Comments? Ideas?
Al
Hi, I work with a large team developing an ASP.NET application that has a large database with over 50 complex stored procedures. It is proving more and more difficult and time-consuming to centralise the development and update of the database changes, and I was wondering if there are any best practices/tools that could be recommended. I have looked on the web for good articles and haven't found anything definitive (except that Team Foundation Server is the way forward).

A brief background to the current process: everyone develops on the same database, and then updates the stored procedure scripts in SourceSafe (manually). Then, when we do a new release, someone builds a script of all the database updates and runs it. There are issues with developers updating their stored procedures over other people's, and other concurrency problems.

I am looking to move all the developers to local databases so that their work only affects them, but that brings up the problem of keeping all the local databases up to date whenever they get the latest source code. The only way I currently see is to build a database update program that will run and update to the latest version. Surely this must be a common issue? Anyone have any good ideas/concepts? Our setup is Visual Studio 2005, SQL Server 2005 and SourceSafe 2005. Cheers, Andrew Thomas
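The update program I am imagining would apply numbered change scripts and track them in a version table, roughly like this (table and column names invented):

    -- run once per change script, inside the update program
    IF NOT EXISTS (SELECT 1 FROM SchemaVersion WHERE ScriptNo = 42)
    BEGIN
        -- ...apply change script 42 here...
        INSERT INTO SchemaVersion (ScriptNo, AppliedOn) VALUES (42, GETDATE());
    END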
Hello, I am just starting to learn SQL and know the basics, but now I'm looking for a good book to learn some more. A book that covers stored procedures would be very useful. If possible, a book with Q&A would be very good, because I feel this tests whether you understand what was just explained; but if there is a good book without this, that is OK. All suggestions welcome.
Hi, I am using VS 2005 for development of a web application for reporting, with SQL Express 2005 as the back end. Later, when the project is ready for deployment, I have to deploy it on a remote hosting server where I have limited access and a SQL Server 2000 database to use. I want to ask: are there any limitations or problems with a SQL Express project when deploying it against a remote SQL Server 2000, and should I continue with SQL Express as the back end? Also, are there any problems with using dynamic database connections (via smart tags) rather than connecting the database to ASP.NET programmatically, i.e. by writing code? I am new to development; please guide me.
Hello all. I have made a search function, but it is not good. My code is like this:

    Public Class getall
        Public Function getitem(ByVal id As String) As DataSet
            Dim con As SqlConnection = New SqlConnection("Data Source=BOY\sqlexpress;Initial Catalog=GAMES;User ID=ha;Password=a")
            Dim ds As New DataSet()
            Dim adapter As New SqlDataAdapter("select * from [item] where name like '%" & id & "%'", con)
            Try
                con.Open()
                adapter.Fill(ds, "user")
                Return ds
            Catch ex As Exception
                Console.Write(ex.Message)
            Finally
                con.Close()
                con = Nothing
            End Try
            Return ds
        End Function
    End Class

My item table in the database contains "dragon ball 3" and "counter strike". If I enter "dragon", it displays "dragon ball 3"; but if I enter "dragon 3", it does not display "dragon ball 3", and it should. How should I change my code? Thanks...
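Would splitting the search string on spaces and requiring each word separately be the right change? In SQL terms, I mean having the code generate something like this instead (just a sketch of the query I think I need):

    select * from [item]
    where name like '%dragon%'
      and name like '%3%'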
So, as you can see, at first it appears that I'm after a LEFT JOIN - meaning that the grandparents don't need to have child records to be returned - but then it turns out that I need INNER JOINs, to limit the grandparents when I choose children.
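What I am experimenting with is a LEFT JOIN whose filter is applied conditionally, so one query gives both behaviours (table and parameter names here are made up):

    SELECT DISTINCT g.GrandparentID, g.Name
    FROM Grandparents g
    LEFT JOIN Parents p ON p.GrandparentID = g.GrandparentID
    LEFT JOIN Children c ON c.ParentID = p.ParentID
    WHERE @ChildName IS NULL      -- no child filter: every grandparent comes back
       OR c.Name = @ChildName;    -- child filter: NULL rows drop out, like an INNER JOIN

When @ChildName is NULL every grandparent survives; when it is set, grandparents with no matching child have a NULL c.Name and drop out, which behaves like the INNER JOIN.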
I'm wondering if there's a better SQL editor than MS Query Analyzer on the market. I like a lot of the functionality provided by QA but want extra stuff like you get in VB6: IntelliSense (syntax prompting), auto-complete (Ctrl+Space provides a list of sp's, tables, etc.) plus any other time-saving features.
I've tried a few products but nothing quite hits the mark. Is there a program you use and recommend I trial?
Folks, I've got a table with a column ACCOUNT VARCHAR(30). All the values are numeric, though (leave aside the datatype for now). The column has a clustered index.
SELECT * FROM MYTABLE WHERE LEFT(ACCOUNT,3)='123' execution plan shows CLUSTERED INDEX SCAN.
SELECT * FROM MYTABLE WHERE ACCOUNT LIKE '123%' execution plan shows CLUSTERED INDEX SEEK.
How come? Why doesn't the optimizer do as well for the first query?
Hi, I've got some work to do on SQL queries. The scenario is below, and at the bottom is my attempt at answering the questions. Could somebody simply tell me if the answers at the bottom are correct, and if not, what I have done wrong?
A local company that produces machine parts has decided to develop an in-house database system. They have identified the following tables: -
tblOrders: OrderNo, CustomerNo, Date, OrderTotal
tblCustomers: CustomerNo, Name, Street, Town, County, Postcode
tblParts: PartNo, Description, UnitCost
tblItems: OrderNo, PartNo, Quantity, ItemTotal
Create SQL queries to produce the following: -
a) Details of all orders over £1000 sorted by customer number.
b) A list of all part descriptions and their quantities appearing on order 39
c) Delete all orders placed by customers in Wrexham.
d) Archive all orders placed by customer Clarke into a new table called tblArchive.
e) Increase the price of all parts whose description includes the word “washer” by 4%.
These are my answers, which I'm not too sure are correct. If anyone could tell me whether they're correct or not, that would be great, thanks.
a) SELECT * FROM tblOrders ORDERBY CustomerNO WHERE OrderTotal > 1000
b) SELECT tblParts.PartNo, tblParts.Description, tblItems.Quantity FROM tblItems INNER JOIN tblParts ON tblItems.PartNo = tblParts.PartNo; WHERE OrderNo = 39
c) DELETE tblOrders.* FROM tblOrders INNER JOIN tblCustomers ON tblOrders.CustomerID = tblCustomers.CustomerID WHERE Town = “Wrexham”
d) INSERT INTO tblArchive SELECT * FROM tblOrders INNER JOIN tblCustomers ON tblOrders.CustomerID = tblCustomers.CustomerID WHERE Name = “Clarke”
e) UPDATE tblParts SET UnitCost = [UnitCost]*1.04 WHERE Description LIKE “*washer” or Description LIKE “washer*” or Description LIKE “*washer*”
I'm about 6 weeks into SQL and SQL Server (7). I was wondering whether you could share your opinions about which language to use as a programming tool for developing apps for and with SQL Server. I'm choosing between (Visual) C++ and Java.
I already know C, and DB-Library contains a lot of it, but I'm kind of trying to expand my horizons. I'm OK with either C++/VC++ or Java, but I only have time to learn (or be good at) one.
Any suggestions? (I'd like to hear what you think even if you say neither C++ nor Java - maybe VB? What's easy and marketable is what matters most.)
I've been a SQL Server DBA for 5 or 6 years now. With the upcoming release (eventually, I'm sure) of Yukon/SQL 2005, I've read that it's important for DBAs to pick up one of the .NET languages. I've figured I'll try to learn VB.NET; I've had a little exposure to it and can usually figure out what's going on in VB code I've read, but I seriously doubt I could write anything in it from scratch. I want to learn it in a bad way - can you all recommend any self-paced books that will walk me through it? I've never had any formal training with it and don't know a class from a DLL.... Thanks in advance for your help!!
Hi guys, we have the following problem. For security reasons, each table in our DB has an additional field which is calculated as a hash value of all the columns in that particular row. Every time a field in a particular row is changed, we create and call a select query from our application to obtain all the fields for that row, and then re-calculate and update the hash value. Obviously this approach is very inefficient. The alternative is to create a trigger on the update event and then execute a stored procedure which re-calculates and updates the hash value. The problem with this approach is that an end user could then change the data in the tables and run this stored procedure to adjust the hash value. We are looking for some solution that could speed up the hash value updating without allowing the end user to do it himself. Thanks in advance, Leon
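One idea we are weighing, if we can move to SQL 2005, is computing the hash inside the update trigger itself (using HASHBYTES), so there is no separate procedure for a user to call. A rough sketch with made-up table and column names:

    CREATE TRIGGER trg_MyTable_Hash ON MyTable
    AFTER INSERT, UPDATE
    AS
    UPDATE t
    SET t.RowHash = HASHBYTES('MD5',
            ISNULL(t.Col1, '') + '|' + ISNULL(CONVERT(varchar(30), t.Col2), ''))
    FROM MyTable t
    JOIN inserted i ON i.KeyID = t.KeyID;
    -- the trigger does not re-fire for its own UPDATE unless RECURSIVE_TRIGGERS is ON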
Hello, for a new job I might have to learn SQL. I've never worked with databases or SQL, so I'll need to learn. Can anybody advise me on what would be a good book to learn from? I'm quite an experienced programmer, so it doesn't have to be a dummies guide, and preferably not a bulky book like the "SQL Bible" or something. Oh, one of my 'favourite' computer books of all time is "Thinking in Java" by Bruce Eckel, to give you an idea. Mike -- not sure if there's a better group to ask these questions