I have a problem with SQL 2000 here...
I just imported the same set of data into my database table and ended up with duplicated records: all of the rows were imported even though the same data already exists in the table.
Can anyone help me with this?
How can I import the data so that rows which already exist in the database are not imported again?
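One approach that is often used for this, sketched here under the assumption that the data lands in a staging table first and that the real table is dbo.Customers with a natural key of CustomerCode (both names are made up for the example), is to copy across only the rows whose key is not already present:

Code:
-- Load the file into dbo.Customers_Staging first (DTS, BULK INSERT, etc.),
-- then insert only the rows that are not already in the target table.
INSERT INTO dbo.Customers (CustomerCode, CustomerName)
SELECT s.CustomerCode, s.CustomerName
FROM dbo.Customers_Staging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Customers AS c
                  WHERE c.CustomerCode = s.CustomerCode);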
While recently working with several mining models, I came across something that struck me as pretty odd - and I'm hoping to find an explanation for the behavior.
Consider the following setup:
- A single table in the relational database represents the only case table
- A single, continuous column is the predictable
- A mining structure has been created
- The mining structure contains a single model, based on the MS Decision Trees algorithm
- Input columns were selected for the model via the BI Studio wizard (i.e., those provided via the "Suggest" button)
- The structure has been fully processed

Now, the interesting parts:

1. I view the scatterplot for the mining model, under the Mining Accuracy Chart tab.
2. Back on the Mining Structure tab, I delete one of the input columns.
3. I add the same column back into the structure.
4. The structure is fully processed again.
5. When I view the scatterplot for the mining model, under the Mining Accuracy Chart tab, a different set of data points is presented for the model predictions.
6. A different set of decision trees under the Mining Model Viewer tab confirms this.

How could different patterns have been found this second time around, even though all of the input columns were the same (as well as the training cases)?
(Note: I encountered this situation while creating a new mining model that was identical to an existing one. Even though the models received the exact same inputs and training cases, they yielded different results. I was able to reproduce the behavior by using steps 1-6 above, though.)
Can someone provide some insight on this behavior, or some kind of explanation of what may be happening?
I have a table with 22 million Business records. I can see that there are duplicates when I group by BusinessName, Address, and Phone. I'd like to place only the duplicates into a table, with a ranking where the oldest business key gets a ranking of 1.
As a bonus I'd like each group to have a distinct group name (not strictly necessary, I just want to know how to do this).
Later, after I run more verifications to make sure these rows are not referenced elsewhere, I'll delete everything with a matchRank > 1 from the main Business table.
DROP TABLE [dbo].[TestBusiness];
GO
CREATE TABLE [dbo].[TestBusiness](
    [Business_pk]  INT IDENTITY(1,1) NOT NULL,
    [BusinessName] VARCHAR(200) NOT NULL,
    [Address]      VARCHAR(MAX) NOT NULL,
    [Phone]        VARCHAR(50)  NOT NULL  -- remaining columns assumed; the original snippet was cut off here
);
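Assuming a server that supports the ranking functions (SQL Server 2005 or later, which the VARCHAR(MAX) column suggests), one sketch of the duplicate-and-rank step could look like this; matchRank and the group numbering come from the description above, the Phone column is assumed, and everything else mirrors the table definition:

Code:
-- Copy only duplicated businesses to a side table. Within each
-- (BusinessName, Address, Phone) group the oldest Business_pk gets matchRank 1,
-- and DENSE_RANK gives every group its own number (a stand-in for a group name).
SELECT x.*
INTO   dbo.TestBusiness_Duplicates
FROM (
    SELECT b.Business_pk,
           b.BusinessName,
           b.Address,
           b.Phone,
           ROW_NUMBER() OVER (PARTITION BY b.BusinessName, b.Address, b.Phone
                              ORDER BY b.Business_pk) AS matchRank,
           DENSE_RANK() OVER (ORDER BY b.BusinessName, b.Address, b.Phone) AS groupNumber,
           COUNT(*)     OVER (PARTITION BY b.BusinessName, b.Address, b.Phone) AS groupSize
    FROM dbo.TestBusiness AS b
) AS x
WHERE x.groupSize > 1;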
Hi! I am joining 3 tables in SQL, and I am getting the results I want except they're duplicated. The resulting table from my stored procedure has 3 rows that have the same bulletin. How do I filter the stored procedure to output only the rows that don't have duplicate entries for the column 'Bulletin'? Thanks. Here is my stored procedure:
PROCEDURE [dbo].[spGetCompBulletins]
    @Userid uniqueidentifier OUTPUT,
    @DisplayName varchar(200)
AS
SELECT *
FROM dbo.UserProfile
INNER JOIN dbo.bulletins ON dbo.UserProfile.UserId = dbo.bulletins.Userid
INNER JOIN dbo.Associations ON dbo.Associations.BusinessID = dbo.bulletins.Userid
WHERE UserProfile.DisplayName = @DisplayName AND UserProfile.Userid = @Userid
ORDER BY Bulletins.Bulletin_Date
RETURN
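Assuming the join to dbo.Associations is what multiplies the rows and you don't actually need any Associations columns in the output, one hedged fix is to select only the columns you display and de-duplicate them (a sketch; Bulletin and Bulletin_Date are the only column names taken from the original procedure):

Code:
-- Return each bulletin once for the given user.
SELECT DISTINCT
       dbo.bulletins.Bulletin,
       dbo.bulletins.Bulletin_Date
FROM dbo.UserProfile
INNER JOIN dbo.bulletins
        ON dbo.UserProfile.UserId = dbo.bulletins.Userid
WHERE dbo.UserProfile.DisplayName = @DisplayName
  AND dbo.UserProfile.Userid = @Userid
ORDER BY dbo.bulletins.Bulletin_Date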
I'm relatively new to SQL and I've come across something that doesn't seem quite right. When an insert becomes part of a transaction I notice an exclusive KEY lock in Enterprise Manager. The table in question was using a clustered index, but I changed that, dropped the table and brought it back in, and I still get the lock, which keeps all others out of the table. Is this the expected behavior or is there something I am missing? Could the size of the table affect things? This is a very small table currently. I'm using MSSQL 7 SP3.
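For what it's worth, that is expected: the exclusive lock taken by an INSERT is held until the transaction commits or rolls back, regardless of table size or index layout, so anything that needs the locked key (or the whole small table, if the lock escalates) waits. A minimal illustration with made-up names:

Code:
BEGIN TRAN;
INSERT INTO dbo.SmallTable (Id, Name) VALUES (1, 'test');
-- The exclusive key lock on the new row is now held and will block other
-- sessions that touch it, until the transaction ends one way or the other.
COMMIT TRAN;   -- or ROLLBACK TRAN; either one releases the lock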
I need to copy data from 3 tables in one database into another db. The destination db already contains some data and it is expected that there will be duplicates which we do not want to have copied across (I think there is a constraint that prevents duplicate email addresses which is our main search field)
The three tables are effectively a user table, an address table, and a [phone] numbers table, each of which has an auto generated id field. The user table also maintains a reference to the address and numbers tables.
We are using SQL Server 8 (SP3) and it has been suggested that I use the data transformation service (DTS) tool which I have used numerous times to copy entire databases, but I can't figure this bit out.
I am still learning t-sql using SQL Query Analyzer, but have been doing so for a while and think that I'm fairly competent in it. My main question is this: Is it possible to connect to two DBs at the same time in SQL QA? If so, I'm pretty sure that I could work out how to pass the data across, I'd just need to know how to connect to them both.
Any help would be much appreciated. If you need any more information to help, please let me know.
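You don't actually need two connections in Query Analyzer as long as both databases live on the same server: a single connection can reference each table with its three-part name (database.owner.table). A very rough sketch of the duplicate-skipping part, with every table and column name assumed (and ignoring, for brevity, the step of re-generating the address and numbers identity values in the destination, which is the fiddly bit):

Code:
-- Run from either database; the three-part names do the cross-database work.
-- Copy only users whose email address is not already in the destination.
INSERT INTO DestDB.dbo.Users (UserName, Email)
SELECT s.UserName, s.Email
FROM SourceDB.dbo.Users AS s
WHERE NOT EXISTS (SELECT 1
                  FROM DestDB.dbo.Users AS d
                  WHERE d.Email = s.Email);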
I am currently trying to import data from a table in one database into a table of the same name in another database. This in itself is simple; however, to add a twist to the proceedings, there is data that exists in both tables. I just want to import the data that doesn't already exist in the table I am importing into.
Please can you advise as to the best method to use?
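Assuming both databases are on the same server and the table has a key column to match on (called Id below, purely as an assumption), one common method is an INSERT ... SELECT with an anti-join so only the missing rows come across:

Code:
INSERT INTO TargetDB.dbo.MyTable (Id, Col1, Col2)
SELECT s.Id, s.Col1, s.Col2
FROM SourceDB.dbo.MyTable AS s
LEFT JOIN TargetDB.dbo.MyTable AS t
       ON t.Id = s.Id
WHERE t.Id IS NULL;   -- no match in the target, so the row is new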
Hiya everyone,
I have two tables in SQL 2000. I would like to append the contents of TableA to TableB. TableA has around 1.1 million records. TableB has around 1 million records. Basically TableA has all of the data held in TableB plus 100,000 additional records. I would only like to import or append these new additional records. I have a unique index already set up on TableB. Any ideas, pretty pretty please?
Paul.
P.S. I have been messing around with DTS but get a unique violation error - which is kinda what I want, I guess, but I would like SQL to ignore the error and only copy the new data - if only.
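Since TableB already has a unique index, one option is to recreate that index WITH IGNORE_DUP_KEY so SQL Server discards the duplicate rows with a warning instead of failing the whole insert - which sounds like exactly the behaviour you were hoping DTS would give you. A sketch, with the key and column names assumed (the alternative of simply filtering the INSERT on the key avoids changing the index at all):

Code:
-- SQL 2000 syntax: recreate the existing unique index so duplicates are ignored.
-- (DROP_EXISTING assumes the new index keeps the same name as the old one.)
CREATE UNIQUE INDEX IX_TableB_KeyCol
    ON dbo.TableB (KeyCol)
    WITH IGNORE_DUP_KEY, DROP_EXISTING;

-- The append now skips the rows whose key already exists in TableB.
INSERT INTO dbo.TableB (KeyCol, Col1)
SELECT KeyCol, Col1
FROM dbo.TableA;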
I created a report that uses a parameter to basically return a record per page, but it doesn't return all records that match the parameter. Thoughts?
Specifically, I have a table that tracks the various sites my engineers are responsible for; each engineer has about twelve. The problem is that every time I select an engineer I only see six pages (records). When I select two or three engineers, I see 12 or 18 pages, but still just six records per engineer.
I have a SQL2K/VB.NET 2005-based website that uses a complex search query, whose results will contain additional logic to be evaluated. There are thousands of records and growing, so it is not feasible to code this within the program... it must be evaluated inline or after the query, and it is also not feasible to set up additional fields and tables to handle the logic.
For a very general example: In the .NET code, the following variables are recognized: sex="M" Paid=4350.00 Outstanding=28000.50
One of the query result fields will contain the additional logic to evaluate and another will tell the type of expression..
EVAL  EXPR
BOOL  Sex='F' and Paid/Outstanding < 27.50
BOOL  Sex='M' and Paid/Outstanding < 38 or Sex='F'
INT   Paid*52.33
In other words..the thousands of records being returned have their own additional logic to evaluate. Is there a way this can be done by importing the variable into SQL server and testing it during the query?
If not, is there a way that I can run the code in the middle of .NET? I know I could run scripted code while in ASP, but ASP.NET is compiled, so I don't know if it can be done there...
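On the SQL side this can be done with dynamic SQL, provided the stored expressions are written (or rewritten) in terms of parameter names such as @Sex, @Paid and @Outstanding - that naming convention, and the table/column names below, are assumptions for the sketch. The application's values are passed in once as parameters and each row's expression is evaluated with sp_executesql:

Code:
-- Evaluate each row's stored BOOL expression against values supplied by the app.
DECLARE @Sex CHAR(1), @Paid MONEY, @Outstanding MONEY;
SELECT @Sex = 'M', @Paid = 4350.00, @Outstanding = 28000.50;

DECLARE @Expr NVARCHAR(4000), @Sql NVARCHAR(4000), @Result BIT;

DECLARE expr_cursor CURSOR FAST_FORWARD FOR
    SELECT Expr FROM dbo.SearchResults WHERE EvalType = 'BOOL';  -- hypothetical names

OPEN expr_cursor;
FETCH NEXT FROM expr_cursor INTO @Expr;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = N'SELECT @Result = CASE WHEN ' + @Expr + N' THEN 1 ELSE 0 END';
    EXEC sp_executesql @Sql,
         N'@Sex CHAR(1), @Paid MONEY, @Outstanding MONEY, @Result BIT OUTPUT',
         @Sex = @Sex, @Paid = @Paid, @Outstanding = @Outstanding,
         @Result = @Result OUTPUT;
    -- @Result now holds 1/0 for this row; store or return it as needed.
    FETCH NEXT FROM expr_cursor INTO @Expr;
END
CLOSE expr_cursor;
DEALLOCATE expr_cursor;

The INT-type expressions could be handled the same way with a numeric output parameter. The obvious caveats are performance (one sp_executesql call per row) and the fact that whatever is in the expression column gets executed, so it has to be trusted data.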
I have a report that is using the following expression for one of the fields:
=Split(Fields!FromAccount.Value, " ")(1)
The format of the number is xxxxx-xxx xxxxxxxxxxxx
So the function is to grab the second section of numbers after the space. The query for this report can bring back 1 to many results which equals one to many pages.
I have an issue where the query returns three results. The first page will display the correct number, the second page displays #Error and the third displays the number.
When I run the query from Management Studio I see the numbers are as follows:
Note the first and last lines have extra spaces, which I thought would be the cause of the problem, though I would've expected the #Error to appear on the first and third page rather than the second one.
When I remove the extra space for the first and third number all three pages display their values correctly.
However, there are many, many numbers in the table that have one to two spaces. These numbers come from a spreadsheet that is imported biweekly, so it was either fix the spreadsheet before every import or come up with a new expression that checks for one or two spaces. This is what I came up with:
Now, the first page displays the number correctly while the second and third pages display the #Error. Basically I want to say: if the value contains two spaces then split at the two spaces, otherwise split at the one space.
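Rather than making the report expression handle both cases, one hedged alternative is to normalize the spacing in the dataset query itself, so every value reaches Split() with exactly one space; since the data only ever has one or two spaces, a single REPLACE of two spaces with one is enough. The table and column names below are assumptions:

Code:
-- Collapse the occasional double space to a single space before the value
-- reaches the =Split(Fields!FromAccount.Value, " ")(1) expression.
SELECT REPLACE(FromAccount, '  ', ' ') AS FromAccount
FROM dbo.Accounts;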
I have developed a custom authentication extension to Reporting Services 2005, using Visual Studio 2005 C#.
In local integration tests the extension behaved as expected, honouring the role based security of our main system. Following the deployment steps laid out from numerous sources all worked perfectly. We were optimistic.
I've since deployed to a 'live' user acceptance staging server, using the same procedures used in Integration and it's behaving incorrectly.
Initially the log in page is displayed as expected, at this point SQL dumps occur in the RS Log directory, the same as is seen here: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1750128&SiteID=1
Hopefully there'll be a resolution to that soon. As described in that topic, the main page of Report Manager appears.
After this it all goes wrong.
Sometimes we can browse to a folder and run a report. But after running the report, trying to run something else, or browse back to the containing folder we are presented with the error page with the message:
The permissions granted to user '' are insufficient for performing this operation.
Clicking on the home link brings up the top title and nothing else and you have no option but to close the site and re-login.
Sometimes you get the above message before you can even run a report. The error logs have the message listed at the end of this post.
Other times you log in to the website and the folders don't even appear, just the top title bar.
I have since added some verbose logging in each of the implemented methods in the extension DLL. The permission-checking method returns true, and all the various data in each method appears as expected.
I have checked, checked again and re-checked all the configuration files, they match the local Integration files, barring the machine specific keys etc, they weren't just overwritten.
The location of the log directory where my internal logs are written is read from the configuration files via the SetConfiguration methods; if that weren't set correctly there wouldn't be any logs at all, so the configuration is being read correctly by the extension.
One other thing that I've noticed is that the back-door user that is configured in the config files works perfectly, but I can't see how this can make any sort of difference, as it returns the same result in the extension DLL as a 'normal' user does at the same point in the code.
Can someone please help me, and my poor scalp, it's losing hair at a rate of knots.
Set up: Windows Server 2005 SQL Server 2005 Service Pack 2 (Developer) IIS 6
Error message in the log files:
w3wp!library!1!27/06/2007-10:04:20:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: The permissions granted to user '' are insufficient for performing this operation., ; Info: Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: The permissions granted to user '' are insufficient for performing this operation.
w3wp!ui!1!27/06/2007-10:04:20:: e ERROR: The permissions granted to user '' are insufficient for performing this operation.
w3wp!ui!1!27/06/2007-10:04:20:: e ERROR: HTTP status code --> 500
-------Details--------
Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: The permissions granted to user '' are insufficient for performing this operation.
w3wp!ui!1!27/06/2007-10:04:20:: e ERROR: Exception in ShowErrorPage: System.Threading.ThreadAbortException: Thread was being aborted.
   at System.Threading.Thread.AbortInternal()
   at System.Threading.Thread.Abort(Object stateInfo)
   at System.Web.HttpResponse.End()
   at System.Web.HttpServerUtility.Transfer(String path, Boolean preserveForm)
   at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg)
   at
   at System.Threading.Thread.AbortInternal()
   at System.Threading.Thread.Abort(Object stateInfo)
   at System.Web.HttpResponse.End()
   at System.Web.HttpServerUtility.Transfer(String path, Boolean preserveForm)
   at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg)
I have designed a CV database with the complete CV stored in a TEXT field. There is a keyword search which queries the TEXT field as well. The query conditions are defined in T-SQL submitted through an ASP page. There are about 20,000 records now. While querying the database for a keyword search I am receiving timeout errors. Is there any solution other than Index Server to rectify this situation? How can I speed up the query execution time? Please advise.
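One option that is separate from Index Server is SQL Server's own Full-Text Search: index the TEXT column once and replace LIKE '%keyword%' scans with CONTAINS, which is usually what turns these timeouts into fast queries. A sketch using the SQL Server 2005 DDL (on SQL Server 2000 the sp_fulltext_* procedures do the same job); the table, key index and catalog names are assumptions:

Code:
-- One-time setup: a full-text catalog and an index on the CV text column.
CREATE FULLTEXT CATALOG CvCatalog;
CREATE FULLTEXT INDEX ON dbo.CV (CVText)
    KEY INDEX PK_CV            -- the table's unique key index
    ON CvCatalog
    WITH CHANGE_TRACKING AUTO;

-- The keyword search then becomes an indexed CONTAINS query instead of a scan.
SELECT CvId
FROM dbo.CV
WHERE CONTAINS(CVText, '"project manager"');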
I have two reports. One is the main/summary report and other one is drill through. When I pass the Start and End Date parameters from main to the drill, the original format of DateTime changes. For example, in main report the data is displayed for following date range:
4/7/2007 - 5/9/2007 (i.e 4 July 2007 to 5 Sept 2007)
which displays correct data.
However, when I click on the drill through link, it jumps to the drill through report but displays data for the following period:
7/4/2007 - 9/5/2007 (i.e. 7 Apr 2007 to 9 May 2007)
Reporting Services is converting the report parameter values from one format to another when passing them from the parent report to the drill-through. When run individually, these two reports display data for the correct date range. And as you can imagine, the child report crashes with an rsReportParameterTypeMismatch error if the start or end date has a day part greater than 12 (e.g. 25/4/2007).
I can't understand what could be going wrong. All the parameters in both reports are datetime, so intrinsically it shouldn't matter even if Reporting Services is converting or using different date formats, as long as the data type remains the same. Is there a way to fix this and force the parameters to stay in the format they are given in the main report?
I set up this package to import data from a SharePoint list to a SQL Server table. The primary key of my SQL table is mapped to the Title column of my SharePoint list. There is a possibility that duplicate values will be entered in the Title field of the SharePoint list, so when importing data into my table via SSIS, my package always errors out when it comes across duplicate values. How have others managed data integrity when importing from a SharePoint list with the Title column mapped to the primary key of a table?
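One pattern that can work here, sketched with made-up names, is to land the SharePoint rows in a key-less staging table first and then collapse the duplicates on Title before they ever reach the real table, so the primary key violation never happens inside the data flow:

Code:
-- Keep one row per Title (here the most recently modified row is assumed to win)
-- and only insert Titles the target table does not already contain.
INSERT INTO dbo.TargetTable (Title, OtherCol)
SELECT s.Title, s.OtherCol
FROM (
    SELECT Title,
           OtherCol,
           ROW_NUMBER() OVER (PARTITION BY Title ORDER BY Modified DESC) AS rn
    FROM dbo.SharePointStaging
) AS s
WHERE s.rn = 1
  AND NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Title = s.Title);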
I have one column in SQL Server 2005 of data type VARCHAR(4000).
I imported the SQL Server 2005 database data into one .mdb file. After importing the data into the .mdb file, the above column's data type was converted to the Memo type in the Access database.
Now, when I try to import the data from this MS Access file (db1.mdb) into another SQL Server 2005 database, I get a Unicode conversion error for the Memo data type in the Export/Import Data wizard.
Could you please let me know what is the reason?
I know that the Memo data type is not supported in SQL Server 2005.
I am using SQL Server 2005 Standard Edition with SP2.
Please help me understand this issue correctly.
We have a daily process which copies millions of rows of data from one DB to another over a linked server. Just checking on best practice: are there more efficient ways than the linked server to copy millions of rows of data from one DB to another? I checked bulk insert, but that transfers data from a file to the DB, not DB to DB.
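For volumes like this, bcp out / bcp (or BULK INSERT) back in, or an SSIS data flow, are generally faster than pulling rows across a linked server. If the linked server has to stay, batching the copy keeps the transaction log and locking under control; a rough sketch, with the linked server, table and key column names all assumed:

Code:
-- Copy in 50,000-row batches, using an integer key as the watermark.
DECLARE @LastId INT, @MaxId INT;
SELECT @LastId = ISNULL(MAX(Id), 0) FROM dbo.TargetTable;
SELECT @MaxId  = MAX(Id) FROM LINKEDSRV.SourceDb.dbo.SourceTable;

WHILE @LastId < @MaxId
BEGIN
    INSERT INTO dbo.TargetTable (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM LINKEDSRV.SourceDb.dbo.SourceTable
    WHERE Id > @LastId AND Id <= @LastId + 50000;

    SET @LastId = @LastId + 50000;
END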
I have created a simple package that uses a SQL command to pull data from an Oracle database and inserts the data into a SQL 2005 table. Some of the fields that I am pulling from contain two digits after the decimal point; however, this data is lost when it gets into SQL. I have even tried putting the data into a flat file, and still the data is lost.
In the package I have an OLE DB source connection, which is the Oracle database, and when I do the preview I see all the data I need. I am very confused and have tried a number of things to get the data into SQL, but none work. Any ideas would be very helpful.
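One thing that has fixed this for others, when the Oracle column is a NUMBER with no declared scale, is to cast the column to an explicit precision and scale in the source SQL command, so the OLE DB source exposes a numeric type that keeps the two decimal places. The column and table names below are made up:

Code:
-- Source query for the OLE DB source (Oracle side): pin the scale explicitly.
SELECT invoice_id,
       CAST(amount AS NUMBER(18, 2)) AS amount
FROM   billing.invoices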
I'm new to SQL Server 2005 SSIS. I'm trying to do something very simple, but I cannot figure it out, PLEASE HELP!
I have a flat file, which I read and then insert the data into a database table; that works fine. The problem is that I don't want to insert duplicate records. For example, if I run the package again, it will append to the table. What I need is that if the package runs again, it checks whether the record already exists, based on two columns, date and hour, and does not insert the record.
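Inside SSIS the usual answer is a Lookup transform against the destination table on the date and hour columns, sending only the non-matching rows on to the destination. If it's easier, the same thing can be done in T-SQL by loading the file into a staging table and filtering on the way in; the table and column names here are assumptions:

Code:
-- Insert only (LoadDate, LoadHour) combinations the target has not seen yet.
INSERT INTO dbo.Target (LoadDate, LoadHour, Reading)
SELECT s.LoadDate, s.LoadHour, s.Reading
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Target AS t
                  WHERE t.LoadDate = s.LoadDate
                    AND t.LoadHour = s.LoadHour);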
When I import data from Excel to SQL using DTS, a column which has text content is not imported the same as in the Excel sheet; a special character appears in between the lines. The text field contains multiple lines, but the content is imported as a single line.
I'm wondering if SSIS will be the solution to the problem I'm working on.
Some of our customers give us an Excel sheet with data they want to insert or update in the database.
I've created a package that will take an Excel sheet, do some data conversion so the data types match up and after that I use a Slowly Changing Data component to create the insert/update commands.
This works great. If a customer adds a new row to the Excel sheet or updates an existing row changes are nicely reflected in the database.
But now I've got the following problem. The column names and the order of the columns in the Excel sheet are not standard, and in the future a customer might not even use an Excel sheet but something totally different.
Can I use SSIS for this? Is it possible to let the user set the mappings through some sort of user interface? I've looked at programmatically creating the package, but I've got to say that's quite hard to do... it would be easier to write the whole thing myself than to create the package through code ;)
If not I thought about transforming the data in code before I pass it on to the SSIS package in something like XML. That way I can use standard column names and data types.
So how should I solve this problem? Use SSIS or not?
I'm new to SQL and DTS packages. I am trying to import data from an excel spreadsheet to an SQL server table via DTS package. It seems that the excel task looks at the first few records in a column to determine the datatype for that column. If the first few records are text, the entire column is imported as text. If numeric, the entire column is imported as numeric. There are about 25,000 records. In one field, the most important one, about half of the records begin with letters and the rest are all numbers. It is the subscriber ID field, and some subscriber IDs are all numbers, some are letters and numbers. The entire column should be imported as text. However, when I run the transform data task from the excel connection, none of the records that are all numbers are imported. I end up correctly importing only 13,000 of the 25,000 records. The rest are imported with the subscriberID field as <NULL>. I tried using the CAST or CONVERT function in the SQL query, but get the error message "Undefined Function."
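For what it's worth, the Excel driver's type guessing can be influenced rather than fought: adding IMEX=1 to the Excel connection's extended properties tells the Jet driver to treat mixed columns as text (together with the TypeGuessRows registry setting that controls how many rows it samples), and in a Jet-SQL query against the sheet the VBA-style CStr() function is available where T-SQL's CAST/CONVERT is not. Both are offered as things to try rather than guaranteed fixes; the file path and field name are made up:

Code:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\imports\subscribers.xls;
Extended Properties="Excel 8.0;HDR=YES;IMEX=1"

and the source query can coerce the column to text explicitly (Jet SQL, not T-SQL):

Code:
SELECT CStr([Subscriber ID]) AS SubscriberID FROM [Sheet1$]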
Hello, I create a txt file with a bash script, and I need to use it in a DTS package. But I don't know how I can specify the type of my columns, so in the transformation task I get an error due to an incompatible type. What can I do to fix this error? Thanks.
I am creating a DTS package that is combining several tables, converting one column of data to a new column removing all special characters, then exporting the unique data based on this column and another column, and the max of other duplicates to a new table.
Now that I have the data in this table, I want to import any data that is not in my main table.
This "CLEANED" table does not have a designated "key" column, but the table I want to import the unique items does have an ID column that is also a primary key column.
DTS seems to want me to have a Key column to reference when importing from the CLEANED table to the MAIN table.
How would I go about checking the MAIN table against the CLEANED table, having DTS import only the unique items from the CLEANED table that are not present in the MAIN table based on three columns? The rest of the columns I want to just extract the MAX data from the duplicates.
Now here is the query I use to extract the unique values from the "CLEANING" table and get the data into the "CLEANED" table, but I do not know how to use something similar to import into the MAIN table.
Code:
SELECT partno2,
       MAX(partno)    AS partno,
       alt,
       MAX(C_alt)     AS C_alt,
       MAX(cmpycd)    AS cmpycd,
       MAX(type)      AS type,
       compFN,
       MAX(pndesc)    AS pndesc,
       MAX(equipment) AS equipment
INTO   tbl_CLEANED
FROM   tbl_CLEANING
GROUP BY partno2, alt, compFN
ORDER BY partno, compFN
The three main columns I need to check against are partno2, alt, and compFN. I have named the columns the same in both tables.
partno2 is the column that has been copied from partno with all special characters and spaces removed. This is the main column I am using as a reference for unique values; if there is no match there, I have it check against the alt column, then the compFN column. If there are no matches in any of these columns, then I want to extract the data to the MAIN table.
How can I compare these tables and import only unique info to the MAIN table?
In addition, how can I also check items that are the same in both tables and update the MAX info for the other columns (not the three I use for reference - those I need to leave alone), updating them only where there is more data in the CLEANED table than in the MAIN table?
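A rough sketch of both steps, written SQL 2000-style (no MERGE) and treating the three reference columns as a composite match - the cascading partno2-then-alt-then-compFN matching described above would need extra passes - with the MAIN table's column list assumed to mirror the CLEANED one:

Code:
-- Step 1: bring across CLEANED rows that have no match in MAIN.
INSERT INTO tbl_MAIN (partno2, partno, alt, C_alt, cmpycd, type, compFN, pndesc, equipment)
SELECT c.partno2, c.partno, c.alt, c.C_alt, c.cmpycd, c.type, c.compFN, c.pndesc, c.equipment
FROM tbl_CLEANED AS c
WHERE NOT EXISTS (SELECT 1
                  FROM tbl_MAIN AS m
                  WHERE m.partno2 = c.partno2
                    AND m.alt     = c.alt
                    AND m.compFN  = c.compFN);

-- Step 2: for rows that do match, fill in MAIN's other columns only where MAIN
-- is empty and CLEANED has a value (the reference columns are left untouched).
UPDATE m
SET    m.pndesc    = COALESCE(NULLIF(m.pndesc, ''), c.pndesc),
       m.equipment = COALESCE(NULLIF(m.equipment, ''), c.equipment)
FROM   tbl_MAIN AS m
INNER JOIN tbl_CLEANED AS c
        ON c.partno2 = m.partno2
       AND c.alt     = m.alt
       AND c.compFN  = m.compFN;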
I have a process that calls a proc that BCPs a delimited file into a table. Well, the SOX police say a header and footer must be added to the file. Needless to say, this screws up my BCP process. Does anyone know how to strip a header and footer record from a text file using Transact-SQL, or have any other suggestions for stripping the records?
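If switching the load to BULK INSERT is an option, the FIRSTROW option skips the header for you (bcp has the equivalent -F switch), and the footer can be dealt with in a staging table after the load. A sketch with assumed file, table and column names:

Code:
-- FIRSTROW = 2 skips the header line; MAXERRORS = 1 lets the load survive the
-- trailer line if it doesn't match the column layout (an assumption about the file).
BULK INSERT dbo.ImportStaging
FROM 'C:\feeds\daily_extract.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', FIRSTROW = 2, MAXERRORS = 1);

-- If the trailer row does load, remove it by whatever marks it as a footer.
DELETE FROM dbo.ImportStaging
WHERE Col1 LIKE 'TRAILER%';   -- hypothetical footer marker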
I am trying to import data from Excel into my server, but get this error message:
Error during transformation 'DirectCopyXform' for row number 1. Errors encountered so far in this task: 1. TransformCopy 'DirectCopyXform' conversion error: Conversion invalid for datatypes on column pair 19 (source column '*9' (DBTYPE_WSTR), destination column 'F19' (DBTYPE_R8)).