I need to compare two tables and output everything that doesn't match. The tables are joined by the "domainname" column, and I need to output everything in both tables where the "domainname" doesn't match. Any ideas?
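For illustration, a minimal sketch of one way to do this, assuming two tables table1 and table2 (hypothetical names) that each have a domainname column: a FULL OUTER JOIN keeps rows from both sides, and a NULL on either side marks a row with no match in the other table.

    SELECT t1.domainname AS in_table1,
           t2.domainname AS in_table2
    FROM   table1 t1
    FULL OUTER JOIN table2 t2 ON t2.domainname = t1.domainname
    WHERE  t1.domainname IS NULL
       OR  t2.domainname IS NULL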
I am using SQL Server 2005 Express Edition. One table contains a field docdate in the format dd/MM/yyyy hh:mm:ss. The hours and minutes are also required.
I am working in VB 6.0. The user will input a date on the form in the format 'dd/MM/yyyy'. I just want to check this against the SQL Server date, considering only the dd/MM/yyyy part.
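A minimal sketch of one way to do this (the docdate column is from the post; the table name is hypothetical). Using a half-open date range avoids wrapping the column in a function, so an index on docdate can still be used:

    -- interpret the form input as dd/MM/yyyy (convert style 103)
    DECLARE @d datetime
    SET @d = CONVERT(datetime, '17/06/2006', 103)

    -- match every docdate that falls on that day, regardless of time
    SELECT *
    FROM   dbo.MyTable
    WHERE  docdate >= @d
      AND  docdate <  DATEADD(day, 1, @d)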
Language: VB.NET. Database: MS SQL Server 2000. I have developed a tool to compare two databases, but it is taking a long time. First it reads the whole table into main memory (ordered by primary key), then does the same for the corresponding table of the second database, and then compares row by row. I tested a third-party tool which takes only 2 minutes to compare 800,000 records of a table; my tool takes 1 hour 40 minutes. Is there a more optimized method?
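One approach worth testing: let the server compare row checksums instead of shipping every row to the client. A sketch, assuming the second database is reachable as a linked server and both tables share a primary key column id (all names here are hypothetical):

    SELECT a.id
    FROM  (SELECT id, BINARY_CHECKSUM(*) AS cs
           FROM   db1.dbo.MyTable) a
    JOIN  (SELECT id, BINARY_CHECKSUM(*) AS cs
           FROM   LinkedSrv.db2.dbo.MyTable) b ON b.id = a.id
    WHERE a.cs <> b.cs
    -- note: BINARY_CHECKSUM can produce false matches in rare cases,
    -- and rows present on one side only still need a separate
    -- NOT EXISTS check in each direction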
How could I tell the performance difference between two queries:
One is: select * from table where Lower(colomnname) = 'value'
The other is: select * from table where colomnname = 'value'
Basically the difference is the LOWER() function: how much will it affect query performance? Is there a formal way to test this, or some way to reason about it? Thanks, Mike
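One way to measure it directly (identifiers below follow the queries in the post): turn on the time and IO statistics and run both versions. The LOWER() call has to be applied to every row, which prevents an index seek on the column, so if the column is indexed the second query can be far cheaper. Also worth noting: on a case-insensitive collation (the SQL Server default), the LOWER() is usually unnecessary in the first place.

    SET STATISTICS IO ON
    SET STATISTICS TIME ON

    -- non-sargable: the function forces a scan of every row
    SELECT * FROM [table] WHERE LOWER(colomnname) = 'value'

    -- sargable: can seek an index on colomnname if one exists
    SELECT * FROM [table] WHERE colomnname = 'value'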
I am trying to compare Developer Edition and Express Edition. I was about to purchase Developer Edition and wanted to compare it with the other editions, but there is no info to be found. I would greatly appreciate it if someone could direct me to a resource that shows this comparison.
thanks
ps. The product comparison page doesn't show Developer Edition.
1) Crystal Reports provides extensive parameter support. The types of parameters supported are single value, multi-value, and range value (e.g. start date to end date), or a combination of all three.
2) SSRS parameters only support entry of a single value. It doesn't support multi-value parameters or range parameters; for example, the user can't be presented with a list of employees and select more than one to report on. Doing so requires writing custom code and more complex SQL queries.
My question concerns the three columns below: CustomerID, RepairDate, CompletedRepair (Yes or No). The CompletedRepair column is blank initially. I need to update it with the logic below:
- A customer comes to our store to fix their car. If we fix the problem the first time and they don't return later for the same issue, then CompletedRepair = Y.
- If a customer comes back to our store to re-fix the same issue within a 7-day window based on the RepairDate of the previous transaction, then on the last return transaction CompletedRepair = Y (example: RepairDate = 6/12/2006), and on all previous transactions CompletedRepair = N (example: RepairDate = 6/8/2006, 6/9/2006, 6/10/2006).
- If a customer comes back to our store to re-fix the same issue but outside the 7-day window based on the RepairDate, then on the last return transaction CompletedRepair = Y (RepairDate = 6/12/2006) and on the previous transaction CompletedRepair = Y as well (RepairDate = 6/1/2006).
Every time a customer comes to our repair shop for a new issue or an old issue, we create a new repair transaction in our SQL database. The update on the CompletedRepair column will run every day: today's records are checked against the last 7 days of records (based on RepairDate) to determine when the customer was really fixed. The last fix counts as Y and the previous fixes count as N, but the comparison covers only 7 days. In other words, a repair today is considered a completed repair when compared with the last 7 days of repairs, but it might become not completed if the same customer comes back within the next 7 days for the same issue.
The CompletedRepair column is a dynamic column and is updated daily using the logic above.
Below is the expected outcome after we update the Completed Repair column:
CustomerID  RepairDate  CompletedRepair
ab1         06/12/06    Y
ab1         05/28/06    Y
ab1         05/18/06    Y
ab1         05/15/06    N
ab1         05/12/06    N
Initially 5/12/06 had Y; when the 5/15/06 transaction came, it took the Y and made 5/12/06 become N. The 5/18/06 transaction did the same to the 5/15/06 transaction: it made itself Y and converted 5/15/06 to N. 5/28/06 is Y because, compared with 5/18/06, it is outside the 7-day window. 6/12/2006 is Y because, compared with 5/28/06, it is outside the 7-day window.
ab2         06/02/06    Y
ab2         05/28/06    N
ab2         04/19/06    Y
ab2         04/14/06    N
The 4/14/06 transaction was initially Y; it became N when the new transaction on 4/19/06 came. The same thing happened with the transactions on 5/28/06 and 6/2/06.
ab3         05/11/06    Y
ab3         03/29/06    Y
ab3         03/23/06    N
ab3         03/12/06    Y
3/23/06 was Y; when the new transaction on 3/29/06 came, it became N and the new transaction became Y. 5/11/06 is Y because, compared back to 3/29/06, they are outside the 7-day window.
ab4 05/11/06 Y
This ab4 customer came to fix her car only one time and didn't come back. We assume the fix was successful, so we mark CompletedRepair as Y.
I think I would need to use a SQL cursor or a CASE statement for this, but I really don't know how to start. Please advise and help me out. Any ideas and suggestions are really appreciated! If you need more information, please let me know!
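A minimal sketch of the daily update, assuming the rows live in a table dbo.Repairs with exactly the three columns above (the table name is hypothetical, and this ignores the "same issue" qualifier since it isn't in the sample data). The rule reduces to: a repair is N if the same customer has a later repair within 7 days of it, otherwise Y.

    UPDATE r
    SET    CompletedRepair =
           CASE WHEN EXISTS
                (SELECT *
                 FROM   dbo.Repairs later
                 WHERE  later.CustomerID = r.CustomerID
                   AND  later.RepairDate >  r.RepairDate
                   AND  later.RepairDate <= DATEADD(day, 7, r.RepairDate))
                THEN 'N' ELSE 'Y'
           END
    FROM   dbo.Repairs r

Checking this against the sample data reproduces the expected Y/N pattern for ab1 through ab4; run daily, yesterday's Y flips to N automatically when a return visit lands within the window.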
The installation of SQL Server 2008 R2 Management Tools on a Windows 7 workstation fails with the error "The specified account already exists."
Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
Exit code (Decimal): -2068052700
Exit facility code: 1212
Exit error code: 1316
[code]...
I am running SQL Server 2005 x64 Enterprise under Windows 2003 x64 Enterprise. My current backup strategy uses T-SQL jobs run by SQL Agent (writes out *.bak files) and then I have an Integration Services job that copies the *.bak files to our NAS device. I have performed a restore without issue. The jobs are all automated every four hours via SQL Agent. Is this a sound strategy or are there additional benefits to using 3rd party tools? If so, what are the advantages and which tool provides them?
I am working on the technical design of a data integration ETL package which will be moving data from a SQL Server source to a DB2 destination. I currently have two options when moving data to DB2 (IBM AS/400): I can call an AS/400 stored procedure, pass the data to it, and perform the insert processing within the AS/400 environment, or I can do the inserts from SSIS in a DFT and write individually to the AS/400 tables. My question, from a performance and good-practice perspective, is which method I should move forward with. I need a list of the pros and cons of using an AS/400 sproc vs. using SQL within SSIS. I would really appreciate responses from individuals who have done something similar in the past. Thanks a lot; I am really looking forward to responses.
Login failed for user 'TOSHIBA-USER\ASPNET'. I know that the file permissions for the web application have to include ASPNET, so I keep resetting the folder permissions for ASPNET in the file manager, but the login failure keeps coming back every day or two. The problem is that after working with VS05 Pro and SQL Server Management Studio CTP, somehow the ASPNET permissions get changed. I use a lot of SqlDataSource wizards, and often there is a conflict/hang between the datasource wizard and the need to have the MDF in a detached state within the VS Server Explorer. I'm not sure, but the procedure to fix this seems to be: reboot, detach and re-attach the MDF in the SQL Server Studio tool, re-apply the ASPNET file permissions on the web app folders (I wonder whether I should be doing this in IIS instead), and make sure the MDF within Server Explorer is detached; then it works. Anyway, I'm getting really tired of the resulting delays and design-time derailment; clues greatly appreciated, thanks. Sometimes I can use View in Browser from the VS05 form view and I won't get the ASPNET folder permission error, and other times I do. Last thing: is it a bad idea to give ASPNET full permission for the entire web application?
I have the following insert statement in place:

    INSERT WPHPayments (constituentID, constituentName, campaignYear,
                        fundID, fundDescription, dateAndTimeEntered, amount)
    SELECT gt.constituentID, gt.constituentName, gt.campaignYear,
           gt.fundID, gt.fundDescription, gt.dateAndTimeEntered, gt.amount
    FROM   GTPROCENTERFUNDPAYMENTEXTRACT gt, WPHExtract
    WHERE  gt.constituentID = WPHExtract.wph_constID

I want to insert all of the values from the GTPROCENTERFUNDPAYMENTEXTRACT table that have the same constituentID as the records in the WPHExtract table. Am I just missing something? The syntax shows that everything is correct, yet nothing comes back in the result set. Thanks in advance, everyone. Regards, RB
Is there - apart from the notorious RESTORE HEADERONLY - a tool which is able to tell which SQL Server version created a specific BAK file? I'm looking for a tool that can be used without an available/running SQL Server installation.
Alternatively, is there any documentation about what is read with RESTORE HEADERONLY so I could write a tool myself?
Where would I find the version "bytes" in a BAK file?
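For reference, this is the documented T-SQL the post mentions, which does require a running instance; the version that wrote the backup comes back in the SoftwareVersionMajor / SoftwareVersionMinor / SoftwareVersionBuild columns of the result set (the file path below is made up):

    RESTORE HEADERONLY FROM DISK = 'C:\backups\mydb.bak'
    -- SoftwareVersionMajor = 9 indicates SQL Server 2005,
    -- 10 indicates SQL Server 2008 / 2008 R2, and so on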
I have 2 test databases identical in size and table structure; one resides on filegroups and one resides on a single file. From everything I have heard, filegroups are supposed to improve performance. I stop and start the server before each test to clean out the cache. When I run the filegroup test, the first run averages 20 seconds slower than the database sitting on the single file. However, on consecutive runs there is a significant improvement in performance with filegroups versus single files. I have the indexes sitting on a separate filegroup and 2 additional filegroups to spread the tables across. Does anyone know what would be causing the performance to degrade on the initial run of the test? (The test, by the way, is a stored procedure that runs a select statement against each table.)
I'm writing an app in C# and have to store trees in a database.
I'm working with DataSets for the exchange between the DB and the app.
The trees behave like Windows folders: if you delete a node, all subnodes should be deleted too.
But a foreign key from ParentId referencing (Id) with the delete rule set to cascade seems not to be possible, because of multiple cascade paths or cycles. Do I have to add some extra constraints?
Not Possible:
    CREATE TABLE tree (
        Id       varchar NOT NULL,
        ParentId varchar NOT NULL,
        CONSTRAINT pk1 PRIMARY KEY (Id),
        CONSTRAINT fk1 FOREIGN KEY (ParentId) REFERENCES tree(Id)
            ON UPDATE CASCADE ON DELETE CASCADE
    )
Do I have to write triggers which delete the subnodes too, and set the update/delete rules to NO ACTION? (See the sketch below.)
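A minimal sketch of that trigger route, assuming the tree table above with the FK rules changed to NO ACTION (the varchar length is made up; the original omits one). An INSTEAD OF DELETE trigger collects the whole subtree first, then removes it in a single statement, which satisfies the self-referencing FK because parents and children disappear together:

    CREATE TRIGGER trg_tree_delete ON tree
    INSTEAD OF DELETE
    AS
    BEGIN
        SET NOCOUNT ON
        DECLARE @doomed TABLE (Id varchar(50) PRIMARY KEY)

        -- seed with the rows the caller asked to delete
        INSERT INTO @doomed (Id) SELECT Id FROM deleted

        -- walk down the tree level by level
        WHILE @@ROWCOUNT > 0
            INSERT INTO @doomed (Id)
            SELECT t.Id
            FROM   tree t
            JOIN   @doomed d ON t.ParentId = d.Id
            WHERE  NOT EXISTS (SELECT * FROM @doomed x WHERE x.Id = t.Id)

        -- one statement deletes the whole subtree
        DELETE tree WHERE Id IN (SELECT Id FROM @doomed)
    END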
Does anyone know any good links for SQL tree structures, with example queries and so on? I can't really find anything apart from the standard employee/boss/salary example which explains how to create the tree table (I've done that bit). I did notice a book, but I live in a little village so I can't go and get it until the weekend.
I'm desperate; I really need to work out how to do this...
(Wow; surfing this site has really illuminated what a lowly hack-programmer I am in this field of SQL and relational processing :S )

I am creating a temp table, doing a Bill of Material explosion for a single Order Line Item. (Note: SQL Server 2000.)

I used blindman's accumulator method (http://www.dbforums.com/showpost.php?p=6239109&postcount=6) to generate the entire potential BOM tree. So step 1 works wonderfully! :beer: Cheers blindman!

Now I want to remove unwanted nodes (and all their children) from the temp table. I have a bunch of functions (0 to 1 for each node) that return a Yes or No. If "No", then immediately flag-to-eliminate that branch and don't revisit anything on it (and therein lies my problem). This will leave me with a temp table of only valid nodes.

Since this is recursive, and since it will involve dynamic SQL (the function name varies), all I can come up with is using a cursor in a WHILE loop. Even at that, since a cursor is point-in-time (i.e. values don't change once selected), I'll have to re-check the current temp table values (or create a 2nd temp table of only deleted nodes and repeat a SELECT with NOT EXISTS in it, hmmm).

Since blindman's method generates a table ordered by level, the sequence of processing is pre-determined - unless I can re-order it into a more traditional hierarchy (1 entire branch at a time) and number the levels, in which case the cursor could just skip to the next branch of equal or higher level.

Note: the thought does occur to me that I could have an intermediary function (static name) that in turn does the dynamic SQL. These functions contain the business logic that looks at a myriad of column values and relationships in the database, and there's no one-size-fits-all decision tree, so dynamic SQL is necessary.

The max cursor size will be maybe 300, on average 100. The number of levels will normally be 3 or 4, but conceivably could be up to 10. Given the average 100 potential components/sub-assemblies, the final assembly will be about 30. As a periodic background process it will do 3,000 Order Line Items a day, so I'm figuring 1 second response time per build is adequate (i.e. the user's not waiting on it, so it doesn't have to be blindingly fast) - however, why waste?

Anyhow, I thought this might be a fun problem for some data structure genius who wants to give a lesson in relational programming. Thanks for looking. Here's what I have so far:

    CREATE PROCEDURE dbo.sp_ExplodeTest1 (@recID int = 1) AS
    /* tbTestH is a table containing an assembly hierarchy.
       Assemblies with no parent are Builds.
       Assemblies with no children are Components.
       Its columns: MyID int, ParentID int, (other descriptive columns) */
    declare @t table (TempNodeID int identity(1,1), MyID int)

    -- Seed the tree with the Build's ID.
    insert into @t (MyID) values (@recID)

    /* This populates the temp table with the entire assembly for the
       given Build, ordered by level in the assembly. The number of
       assembly levels is unbounded. For example:
       Level 1 = the Build. It comes first.
       Level 2 = all whose parents are level 1. Next (no particular seq).
       Level 3 = all whose parents are level 2. Next (no particular seq).
       etc. */
    while @@Rowcount > 0
        insert into @t (MyID)
        select tbTestH.MyID
        from tbTestH
        inner join @t rList on tbTestH.ParentID = rList.MyID
        where not exists (select *
                          from @t CurrentList
                          where CurrentList.MyID = tbTestH.MyID)

    -- now to display the results so far.
    select t.*, tbTestH.*
    from tbTestH
    inner join @t t on tbTestH.MyID = t.MyID
    order by t.TempNodeID
    GO
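One possible set-based prune, sticking with the accumulator style rather than a cursor: add a flag column to @t, let the business-rule pass mark rejected nodes, then sweep away newly orphaned descendants level by level. A sketch (the Keep column is hypothetical):

    -- 1) drop the nodes the business-rule functions rejected
    delete from @t where Keep = 0

    -- 2) repeatedly drop nodes whose parent is no longer in @t;
    --    each pass removes one generation of orphans
    while @@Rowcount > 0
        delete t
        from @t t
        inner join tbTestH h on h.MyID = t.MyID
        where t.MyID <> @recID           -- the Build has no parent in @t
          and not exists (select * from @t p where p.MyID = h.ParentID)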
I am using the wizard to create a new mining structure and get the following error at the point of selecting the data mining technique (e.g. Microsoft Decision Trees or Microsoft Time Series):
"Unable to retrieve a list of supported data mining algorithms. Make sure you are connected to the correct Analysis Services server instance and the Analysis Services server named localhost is running and configured properly. You can continue with a default list of data mining algorithms."
I have a SQL 7 database with about 30 tables in it. For documentation purposes I need to print out the structure of each table. I have tried using the diagram tools, but no joy.
I have access to Access and Crystal Reports, if these are any use?
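One low-tech option that works on SQL 7 without extra tools: query the INFORMATION_SCHEMA views and print the result set (or feed it to Access or Crystal Reports as a report source). A minimal sketch:

    SELECT TABLE_NAME, ORDINAL_POSITION, COLUMN_NAME, DATA_TYPE,
           CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
    FROM   INFORMATION_SCHEMA.COLUMNS
    ORDER BY TABLE_NAME, ORDINAL_POSITION
    -- sp_help 'tablename' also prints one table's full structure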
I've just read Jeff's query regarding passing an array to a stored procedure, and it seems to be a question that hits the message board quite frequently. It often seems to be the case that you have some block of data in one location and want to process it somewhere else, and there just isn't an easy way to do it (or is there?). Passing a cursor is possible, but getting data back out of a cursor is painful.
The ways I have used in the past are:
- Using a cursor
- Using a global temporary table (see the sketch below)
- Using a "permanent" table, but declared with a unique name (using a timestamp) and passing the table name as a parameter
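For illustration, a minimal sketch of the global-temporary-table workaround (the procedure and table names are made up): the caller fills a ##table that the procedure then reads, since global temp tables are visible across scopes.

    CREATE TABLE ##ArrayInput (Value int)
    INSERT INTO ##ArrayInput VALUES (1)
    INSERT INTO ##ArrayInput VALUES (2)
    INSERT INTO ##ArrayInput VALUES (3)

    EXEC dbo.ProcessArray   -- reads: SELECT Value FROM ##ArrayInput

    DROP TABLE ##ArrayInput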
Has anybody any suggestions for a future SQL release which we could suggest to Microsoft?
Suggestions from myself:
- Having a single T-SQL command to copy a cursor back into a table, to make cursor parameters less cumbersome
- Being able to pass a pointer to a temporary table as an SP parameter
- Allowing variables to be declared with more complex structures (such as arrays, or classes)
I will soon be writing a script which will search through all tables in my database and try to find how they may be linked to one another, and also which tables are called by which stored procedures. Any suggestions or tips would be great.
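A couple of possible starting points in the system tables (SQL Server 2000-era names; note that sysdepends is famously incomplete, since dependencies on objects created after the procedure aren't recorded):

    -- foreign-key links between tables
    SELECT OBJECT_NAME(fkeyid) AS referencing_table,
           OBJECT_NAME(rkeyid) AS referenced_table
    FROM   sysforeignkeys

    -- which stored procedures reference which objects
    SELECT DISTINCT OBJECT_NAME(d.id)    AS proc_name,
                    OBJECT_NAME(d.depid) AS referenced_object
    FROM   sysdepends d
    WHERE  OBJECTPROPERTY(d.id, 'IsProcedure') = 1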
Hi, I have a table of folders that I would like to be able to display in a depth-first manner, similar to what you would see in Windows Explorer. The table is defined similar to:

    CREATE TABLE Folders (
        folderID int,
        parentFolderID int,
        name varchar(32)
    )

... I've left some things out of the definition for the simplicity of the example.

My desire is to generate a record set containing all child folders of a specific folder, along with how many levels deep each folder is, in a quick and memory-efficient manner. I have done some research through this group and found a few examples similar to what I have hypothesized, but I was hoping for some feedback on what I have decided to use. Since this database table is already created and I cannot modify the existing schema, it would not be very feasible to use a nested sets model (unfortunately - or does anyone have a suggestion?).

I plan on using a stored procedure that recursively calls itself, something similar to the following:

    /* Assumes the table #TempFolderTable has already been created
       by the caller */
    CREATE PROCEDURE sp_FolderDisplayRecursive @folder int, @level int
    AS
    IF @@NESTLEVEL > 31 RETURN

    DECLARE @childFolder int
    DECLARE folders_cursor CURSOR LOCAL FOR
        SELECT folderID FROM Folders WHERE parentFolderID = @folder

    OPEN folders_cursor
    FETCH NEXT FROM folders_cursor INTO @childFolder
    WHILE @@Fetch_status = 0
    BEGIN
        -- insert current folder into folder table (under parent @folder)
        INSERT INTO #TempFolderTable (FolderID, DepthLevel, ParentFolderID)
        VALUES (@childFolder, @level, @folder)

        -- iterate over all of its children
        EXEC sp_FolderDisplayRecursive @childFolder, @level + 1

        FETCH NEXT FROM folders_cursor INTO @childFolder
    END
    CLOSE folders_cursor
    DEALLOCATE folders_cursor

Will this incur a large overhead for a deep directory structure? Are there any other problematic issues? Any suggestions and/or comments are very welcome. Thanks!
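For comparison, a cursor-free sketch that walks the same Folders table one level at a time. It fills the temp table breadth-first; the DepthLevel column still records how deep each folder is, though the rows would need extra sorting (e.g. by a path column) for a true Explorer-style depth-first display:

    DECLARE @folder int, @level int
    SELECT @folder = 1, @level = 1      -- hypothetical starting folder

    INSERT INTO #TempFolderTable (FolderID, DepthLevel, ParentFolderID)
    SELECT folderID, @level, parentFolderID
    FROM   Folders
    WHERE  parentFolderID = @folder

    WHILE @@ROWCOUNT > 0
    BEGIN
        SET @level = @level + 1
        INSERT INTO #TempFolderTable (FolderID, DepthLevel, ParentFolderID)
        SELECT f.folderID, @level, f.parentFolderID
        FROM   Folders f
        INNER JOIN #TempFolderTable t ON f.parentFolderID = t.FolderID
        WHERE  t.DepthLevel = @level - 1
    END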
Let us suppose that I have two similar databases and need to create an SQL script upgrading one database structure to the other. For example, these databases are from different versions of some software: the first is from an early version, the second is from the current one, and the second contains several new tables, several new fields in old tables, several new or changed stored procedures, UDFs, and so on. How do I solve this problem using standard tools?
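One starting point using nothing but standard tools: diff the INFORMATION_SCHEMA metadata of the two databases and write the ALTER/CREATE statements from the differences. A minimal sketch for finding columns that exist in the new version but not the old (db_new and db_old are hypothetical database names on the same server):

    SELECT n.TABLE_NAME, n.COLUMN_NAME, n.DATA_TYPE
    FROM   db_new.INFORMATION_SCHEMA.COLUMNS n
    WHERE  NOT EXISTS (SELECT *
                       FROM  db_old.INFORMATION_SCHEMA.COLUMNS o
                       WHERE o.TABLE_NAME  = n.TABLE_NAME
                         AND o.COLUMN_NAME = n.COLUMN_NAME)
    ORDER BY n.TABLE_NAME, n.COLUMN_NAME
    -- similar queries against the .TABLES and .ROUTINES views cover
    -- new tables and changed procedures/UDFs (compare ROUTINE_DEFINITION)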