Insert into #Customers values(101,'Aron',23,1,1,12,1,0);
Insert into #Customers values(102,'Cathy',28,1,1,13,1,0);
Insert into #Customers values(103,'Zarog',33,1,1,14,1,0);
Insert into #Customers values(104,'Michale',25,1,2,12,1,0);
Insert into #Customers values(105,'Linda',43,1,2,13,1,0);
Insert into #Customers values(106,'Burt',53,1,2,14,1,0);
As you can see, the rows are unique based on the internalid per st_code and city_code.
Problem:
Now the user inserts another row, but this time he passes only the following:
Insert into #Customers values(120,'AronNew',null,1,1,12,null,null)
Note he doesn't pass the age or the type.
I want that when he inserts this row, I can match it with the existing row based on st_code, city_code and internalid, and then update the new row's missing values (only the columns that are null) from the existing row.
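A minimal sketch of one way to do that back-fill with a self-join UPDATE. The column names here (id, age, type, st_code, city_code, internalid) are guesses from the description, since the INSERTs don't name their columns; adjust to the real schema:

-- Fill the NULL columns of the newly inserted row from the matching
-- existing row (same st_code, city_code, internalid).
UPDATE n
SET n.age  = COALESCE(n.age,  e.age),
    n.type = COALESCE(n.type, e.type)
FROM #Customers n
JOIN #Customers e
  ON  e.st_code    = n.st_code
  AND e.city_code  = n.city_code
  AND e.internalid = n.internalid
  AND e.id <> n.id               -- the pre-existing row
WHERE n.id = 120;                -- the row that was just inserted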
I want to display some stats using captured information. The x-axis is the date on which something occurred; the y-axis is the number of occurrences on that day.
The problem is that the x-axis needs to be a continuous range, so for, say, 1st Mar 2006 to 8th Mar 2006 I need to display every date within that range.
I've gathered the necessary stats with a GROUP BY statement, but I was hoping I could fill in the gaps without having to loop through the returned data to identify missing dates. Is there any built-in SQL Server function which will allow me to do this within my SELECT statement?
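There is no single built-in function for this, but a common pattern is to generate the full date range and LEFT JOIN the real data onto it, so empty days come back as zero. A sketch, assuming SQL Server 2005 or later and a hypothetical Events(EventDate) table:

DECLARE @from datetime, @to datetime;
SET @from = '20060301';
SET @to   = '20060308';

WITH dates AS (
    SELECT @from AS d
    UNION ALL
    SELECT DATEADD(day, 1, d) FROM dates WHERE d < @to
)
SELECT dates.d,
       COUNT(e.EventDate) AS occurrences          -- 0 on days with no rows
FROM dates
LEFT JOIN Events e ON e.EventDate >= dates.d
                  AND e.EventDate <  DATEADD(day, 1, dates.d)
GROUP BY dates.d
OPTION (MAXRECURSION 0);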
I have a table that keeps track of click statistics for each one of my dealers. I am creating graphs based on the number of clicks they received in a month, but if they didn't receive any in a certain month then that month is left out. I know I have to do some kind of outer join, but I'm having trouble figuring out exactly how. Here is what I have:
select d.name,
       right(convert(varchar(25), s.stamp, 105), 7),
       isnull(count(1), 0)
from tblstats s (nolock)
join tblDealer d (nolock) on s.dealerid = d.id
where d.id = 31
group by right(convert(varchar(25), s.stamp, 105), 7), d.name
order by 2 desc, 3, 1
This dealer had no clicks in April, so this is what shows up:
joe blow 10-2004 567
joe blow 09-2004 269
joe blow 08-2004 66
joe blow 07-2004 30
joe blow 06-2004 8
joe blow 05-2004 5
joe blow 03-2004 9
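One way is to drive the query from a list of months and LEFT JOIN the stats onto it, so a month with no clicks survives with a zero count. A sketch, assuming a hypothetical helper table Months(mon) holding values like '04-2004' (a permanent calendar table works just as well):

select d.name,
       m.mon,
       count(s.dealerid) as clicks    -- counts only matching stat rows, so 0 for empty months
from tblDealer d
cross join Months m
left join tblstats s
       on s.dealerid = d.id
      and right(convert(varchar(25), s.stamp, 105), 7) = m.mon
where d.id = 31
group by d.name, m.mon
order by m.mon desc;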
In my application I am using identity columns. When some rows are deleted from a table, the identity values do not fill the gap. For example, my current identity value is 5, which means rows 1 to 5 were inserted sequentially. If I delete the 3rd and 4th rows, the next identity will still be 6. Is there any method to fill the gaps between rows?
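Gaps left in the middle of an identity sequence cannot be reclaimed automatically; that is by design, and it is usually best to leave them. If the deleted rows were at the end of the table, DBCC CHECKIDENT can move the next value back; a sketch with a hypothetical table name:

-- Make the next insert receive identity value 3.
-- Only safe if no surviving row has an identity value >= 3;
-- otherwise later inserts will collide with existing keys.
DBCC CHECKIDENT ('MyTable', RESEED, 2);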
Hi, I am currently working on an application that uses a stored procedure to retrieve data from a database and then display it in a web page. My problem is that some of the data in the database will be images. I am currently putting in test data to test my code/procedures, and my problem is how to put in test data for images. When I am finished I am going to add an admin section that will let me add images that way, but how do I go about adding them to the database until then? I have set the field to the image data type but have no idea how to relate this to an image file on my server. Thanks, Adam
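On SQL Server 2005 and later you can load a file's bytes directly into an image (or varbinary(max)) column with OPENROWSET ... SINGLE_BLOB, which is handy for seeding test data. A sketch with hypothetical table, column, and path names:

INSERT INTO TestImages (ImageName, ImageData)
SELECT 'logo.gif',
       blob.BulkColumn                  -- the whole file as one binary value
FROM OPENROWSET(BULK 'C:\images\logo.gif', SINGLE_BLOB) AS blob;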
I have a database [CarlosDB] that currently has its .MDF on E:, and I need to move the two .NDF data files off C: to E:\data using a single T-SQL statement:
Looking at the file configuration above, what would be the most logical way, as a DBA on SQL Server 2014 Std, to move the NDF files to live with the MDF file using T-SQL?
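It is not quite a single statement, but the standard pattern on SQL Server 2014 is: repoint each file with ALTER DATABASE ... MODIFY FILE, take the database offline, move the physical files, and bring it back online. A sketch with hypothetical logical file names (check sys.master_files for the real ones):

ALTER DATABASE CarlosDB MODIFY FILE (NAME = CarlosDB_Data2, FILENAME = 'E:\data\CarlosDB_Data2.ndf');
ALTER DATABASE CarlosDB MODIFY FILE (NAME = CarlosDB_Data3, FILENAME = 'E:\data\CarlosDB_Data3.ndf');
ALTER DATABASE CarlosDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Copy the two .ndf files from C: to E:\data at the OS level, then:
ALTER DATABASE CarlosDB SET ONLINE;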
Recently we modified some table structures in my SQL 2000 database, so some columns were dropped or renamed. However, when we use Query Analyzer to modify or update a related stored procedure, it does not flag those missing columns as errors.
What is wrong? Are there any fixes for this issue? Thank you.
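Nothing is broken; this is deferred name resolution. A stored procedure's column references are only resolved at execution time, so creating or altering it succeeds even when a column has been dropped or renamed. On SQL 2000 one workaround is to search the procedure source for the old column names; a sketch with a hypothetical column name:

-- List stored procedures whose source still mentions the dropped column.
SELECT DISTINCT OBJECT_NAME(c.id) AS proc_name
FROM syscomments c
WHERE c.text LIKE '%OldColumnName%'
  AND OBJECTPROPERTY(c.id, 'IsProcedure') = 1;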
We are trying to find the differences between tables in the CUSTOMER database and the CUSTOMER_coded database. The goal is to find out whether any columns are missing in each table of the CUSTOMER_coded database.
We need the list of tables in the CUSTOMER_coded database that are missing some columns compared to their peers in the CUSTOMER database (along with the list of missing columns).
I googled, but I only found how to get all the columns in the tables of a database.
I need the missing columns of all the tables when comparing these two databases (CUSTOMER and CUSTOMER_coded).
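One way is to anti-join the INFORMATION_SCHEMA.COLUMNS views of the two databases; a sketch:

-- Columns that exist in CUSTOMER but are missing from the same table in CUSTOMER_coded.
SELECT c.TABLE_NAME, c.COLUMN_NAME
FROM CUSTOMER.INFORMATION_SCHEMA.COLUMNS c
WHERE NOT EXISTS (
        SELECT 1
        FROM CUSTOMER_coded.INFORMATION_SCHEMA.COLUMNS cc
        WHERE cc.TABLE_SCHEMA = c.TABLE_SCHEMA
          AND cc.TABLE_NAME   = c.TABLE_NAME
          AND cc.COLUMN_NAME  = c.COLUMN_NAME)
ORDER BY c.TABLE_NAME, c.COLUMN_NAME;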
I was using the MDX Query Builder to create MDX queries for a SSRS report. I'm not sure what happened, but when I tried to create another dataset against the cube, the "Drop Column Fields Here" and "Drop Row Fields Here" areas were no longer available for me to drop attributes onto.
I have restarted VS, rebooted, you name it, I've tried it (short of re-installing). Has anyone encountered this, and how did you "fix" it?
BTW: in order to continue working, I decided to use ProClarity to build the MDX for me, and when I tried to paste it into the MDX editor I got the following error: "The query cannot be prepared: The query must have at least one axis. ..". So, as I've seen from other posts, you can't use "any" MDX in the MDX Query Builder.
I'm new to SSIS and trying to automate data imports from text files. The text files I'm importing always contain a fixed set of columns, or a subset of those columns. If I include a subset of columns in the import file (and exclude others), the data doesn't import... I assume because the actual file doesn't include every column defined in the flat file source object?
Is it possible to accomplish this without dynamically selecting the columns, as indicated here: http://msdn2.microsoft.com/en-us/library/ms136020.aspx
I have to load into SS2012 hundreds of Excel files produced by an application over the last five years; over time, a few columns have been added to the initial set. I created a table in SS2012 to match the full set of columns and want to load all the files into that table, leaving the missing cells NULL. I think SSIS can do the job, but every trial has failed so far.
I'm using a Script Component to load data into an Oracle DB due to a poor performance issue. Now I have found that it misses some data during the transmission. Please see the screenshot below:
I need to use a newly installed SSIS component inside an SSIS 2012 project, but in SSDT 2010 I cannot see the SSIS Data Flow Items tab for adding a data source/data destination in the Choose Toolbox Items pane.
I've got a report that is using a cube as a data source and I can't get the report to show all the data. Only data at the lowest level of the cube is displayed. The problem is that most of the data I'm concerned with is at higher levels. There's no problem with the MDX. I get the correct results when I run the query.
I'm using a table to show the results. I've also tried a matrix, but I get the same results. I'm using SSRS 2005 and SSAS 2000.
Anyone have experience with this? Am I missing something simple?
I have a business need to create a report by querying data from an MS SQL 2008 database and displaying the results to users on a web page. The report initially has 6 columns of data, and 2 of the 6 contain JSON, so the users have requested that those 2 JSON columns be parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first JSON column in my query (with CROSS APPLY), I get the right data back, but I get 8 rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
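One way to collapse the function's 8 key/value rows back into columns is a conditional aggregate per key. A sketch, assuming fnSplitJson2 returns name/value columns; the names propertyName/propertyValue and key1/key2 are hypothetical:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4,
       MAX(CASE WHEN B.propertyName = 'key1' THEN B.propertyValue END) AS key1,
       MAX(CASE WHEN B.propertyName = 'key2' THEN B.propertyValue END) AS key2
       -- ...one MAX(CASE ...) per remaining key/value pair...
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
GROUP BY A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4;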
I updated my query (see below) to call the function twice within the CROSS APPLY clause, and I got this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. My second question: how do I get around this error?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B, fnSplitJson2(A.ITEM6, NULL) C
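The comma after B starts a new entry in the FROM list, and the alias A is out of scope there; that is what the "could not be bound" error is complaining about. Chaining a second CROSS APPLY instead should work:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C;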
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
Here's another one of my bitchfests about stuff that annoys the *** out of me in SSIS (and no such problems in DTS):
Do you ever wonder how easy it was to set up a text-file-to-db transform in DTS? I had no problems at all. In SSIS, I spent half a day trying to figure out how to get proper column data types for a text file. Of course MS was brilliant enough to add a "Suggest Types" feature to the text file connection manager, BUT guess what: it samples ONLY 1000 rows. So I tried to change that number to 50000 and clicked OK, BUT MS changed it back to 1000 without me noticing it, SO NO WONDER some of the data types did not match later on. And boy, what fun it is to change the source columns after you have created a few transforms.
This s**t just breaks... So, a word about Derived Columns: pretty useful feature, eh? It's not f***ing useful if it DELETES SOME of the code itself after there have been changes in the dataflow. I can't say how pissed off I am that SSIS went ahead and deleted columns from the flow and messed up derived columns just because the lineage IDs don't match.
Metadata: it would be useful if you could change it and refresh it. I'm just sick and tired of it showing warnings and errors when there's nothing wrong, so after a change I need to double-click all my transforms to make those red and yellow boxes disappear.
Oh, and why I passionately dislike Derived Columns: so you create new fields based on some data, you do some stuff, combine multiple columns into one, but you have no way of saying "remove the columns from the pipeline". Why would you need that? Well, if you have 50K+ rows with 30+ columns, it's EXTRA useless memory overhead for your package.
Hopefully one day I will understand how SSIS works (not an easy task, I say) and I might be able to spend more time on development and less time on my bitchfest. UNTIL then --> Another Day, Another Hassle with SSIS.
The project is a C/S data analysis system built with .NET 2.0 in a Windows environment. OS: Microsoft Windows 2003 R2 Standard Edition Service Pack 2; the database used in this project is SQL Server 2005. As a data analysis system we need to load large amounts of data from files into the database, and we do it by creating a DTS package and then executing "m_Package.Execute(null, variables, m_PackageEvents, null, null)".
The problem is, we fount that DTS miss some data randomly sometimes, we can't find the rule till now. for example we've data as follows in data file, all data field splited by '|' 11234|26341|2007-09-03 00:00|0|0|0.0|0|0.011470833793282509|1|0.045497223734855652|0|0|1|0|3|2929|13130|43|0|2|0|0|40|1|0|0|0|0|0|1||0|0|3|0|0|0|0|0||0|3|0|0|43|43|0|41270|0|3|0|0|10|3|0|0|0|0|0||0|1912|0|0|3|0|0|0|0|0|0|0|3|0|0|5|0|40|0|9|0|0|0|0|0|0|0|0|29|1|1|24|24.0|16|16.0|0|0|0.0|0|0|24|23.980693817138672|0|0.0|0|0.0|0|0.0|0|0.0|11|2.0764460563659668|43|2|0|0|30|11|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|3|3|0|0|0|0|0|0|0|0|0|6|0|0|0|0|0|6|0|0|45|1|0|0|0|2|42|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|0|0|0|2|0|0|0|0|0|0|51|47|85|0|0|||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|||||||||||||0|0|0|0|0|97.117401123046875|0|0|83|57|||0.011738888919353485|0|1|0.065955556929111481|0|4|||0.00026658669230528176|1|0.00014440112863667309|1|68|53|12|2|1|2.0562667846679688|10|94|2|0|0|30|11|47|4|13902|7024|6878|18|85|4.9072666168212891|5|0.0|0|0.0|0|0.0|0|0.0|0|358|6448|6324|0|0|0|0||0||462|967|0|41|39|2|0|0|0|1|0|0|0|0|0|0|0|0|3|0|0|3|0|0|0|0|0|0|0|0|0|3|0|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|46|0|1|0|1|37|0|0|46|0|1|0|1|37|0|0|0|0|0|0|0|0|0.0|0|0|6|4|2|0|0|2|1|0|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|1|0.012290795333683491|0|44|44.0|0|0.0|0|0|0|30|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|0|2|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|0|0|0|0|0|0|0|0|0|0|0|0|27|0|0|2112|411|411|45|437|2|0|2|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|4|0|4|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|0|1|0|0|0|0|0|0|0|0|0|0|0|6|6|0|3|2|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|5|5.0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|600|600|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|6|0|0|0|0|0|0|6|0|9|1|2|2|3|0|1|0|0|0|0|0|0|0|0|0|0|0|13|3|2|5|1|1|1|0|0|0|102|0|1|1|0|0|0|3|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||0|0|0|0|0|0|0|0|0|0|0||0|0|0|0|0|0|0|0|0||||||||||0|0|0|0|0|0|0||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|46.0|46|0.0|0|0.0|0|0.011469460092484951|1|0.0|0|0.0|0|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|0.0|0|0|0|0|0|0|0|0|0|0|0|0|0|||0|100.0|100.0|0|1|0|1|0|0|0.02481505274772644|1|0.014951236546039581|1|0|0|0|0|0|0|0|0|0|0|0|0|0|||||||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|||0|||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|4695.5556640625|42260|7126.66650390625|62040|||||||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||||||0|0||||||||||
We found that some of the data fields become 'null' after the load action finishes. If we load the same data again, the problem disappears. We can't reproduce this issue 100% of the time and we don't know why. Can anybody here help us solve this issue or give us some clue?
Hello all, I am using the Import Wizard to pull in data from an Excel spreadsheet. One column in particular SQL Server sees as a float data type, but it contains varchar data, so I change this in the wizard. However, some of these values are missing when I select * from Sheet1$ in SQL Server 2005. Any ideas why this would happen? I have formatted the particular column as text in Excel.
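The Excel provider behind the wizard guesses each column's type from a sample of the leading rows and returns NULL for values that don't fit the guess; formatting the column as text in Excel doesn't change that. Adding IMEX=1 to the connection's extended properties makes it treat mixed columns as text. A sketch of such a connection string (the path is hypothetical):

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\MyBook.xls;Extended Properties="Excel 8.0;HDR=YES;IMEX=1"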
Using Business Intelligence, I have created a report that runs off a stored procedure. This report has three parameters; for this example we will say the parameters are A, B and C. Parameters B and C can be passed a NULL. The stored procedure, when run on SQL Server 2005, runs fine and gives me the expected data. I then created a report based off this stored procedure. If I run the report and pass it parameter A and either parameter B or C, it works fine. If I run the report and only pass it parameter A, then I do not get any data. I know the stored procedure works because, as I said, when I run it on the server it runs fine. So the problem must be in the report somewhere, but I can't figure out where. I have other reports that use stored procedures, some with parameters that I pass a NULL value to, and they work fine. But this one report is not pulling the data.
Has anybody heard of this before or have any ideas?
I found a big problem. When we run a report, it seems to be missing records. I am sure they should not be getting filtered out, given the filter in the report. I ran Profiler on the SQL box it was pulling the data from, and it seems that they are all there. If I sort by the ID number of the records, then find the place they should be, several are missing. But if I change the filter so that it filters everything but those records, they show up. How could adding a filter cause more records to show up?
I am facing a problem: I found that all the data/records of the tables in a few databases went missing yesterday. I don't know what happened. Can anyone tell me why, and how to trace the root of the problem? Are there any log files I can examine? I fear it will happen again. My server is SQL 2000 running on Windows Server 2000.
I have a SQL 2000 database in which reports are generated on a monthly basis from the data inside one of my tables. The reports have been working fine, until some of the rows seemed to have disappeared!
I know the data used to be in the table, since it shows up on the old reports; however, when I try to pull that same data now, it is not in the database at all.
Does anyone have any ideas on what could have caused this or how I can resolve it?
I am working with some data that is not so clean. I have some rows that are very close to being duplicate rows. I need to take all the rows with the same SKU number and fill in the missing data for the Price, Cost or SalePrice, whatever may be missing. The data is laid out like the @DiamondsMissingData table, and I need the rows to look like the @DiamondsWithData table. I can remove the duplicate rows once I get the missing data filled in.
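A sketch of one way to back-fill per SKU, assuming the table and column names from the description (@DiamondsMissingData with SKU, Price, Cost, SalePrice) and at most one non-NULL value per column within a SKU:

UPDATE d
SET d.Price     = COALESCE(d.Price,     g.Price),
    d.Cost      = COALESCE(d.Cost,      g.Cost),
    d.SalePrice = COALESCE(d.SalePrice, g.SalePrice)
FROM @DiamondsMissingData d
JOIN (SELECT SKU,
             MAX(Price)     AS Price,     -- the one non-NULL value in the group
             MAX(Cost)      AS Cost,
             MAX(SalePrice) AS SalePrice
      FROM @DiamondsMissingData
      GROUP BY SKU) g
  ON g.SKU = d.SKU;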
I have a billing database with patient names in it. I received a tab-delimited file from the insurance plan with our roster of assigned patients.
I now want to compare the insurance roster to our database to see who is missing.
The roster is laid out like this (info jumbled to protect privacy):
Eligibility List Sample
Last Name | First Name | Date of Birth | Gender | Insured ID | VW Acct #
ALLEN | CARRIE A | 4/16/1939 | F | DH36664A | 572576-02
BAKER | AMBER S | 11/24/1956 | F | FXI2824C | 596439-02
BARKLOW | LOREN R | 12/15/1956 | M | KVF0092A | 588878-01
BRENNAN | PATRICIA A | 1/14/1959 | F | FXI8763A | 549675-02
BROWN | MARTHA E | 8/14/1967 | F | BD65508A | 366963-02
CALDWELL | MICHAEL V | 12/19/1969 | M | LR500N2J | 595087-01
CLARK | CYNTHIA A | 4/24/1971 | F | VO600M8O | 596011-02
DEMPSTER | SCOTT A | 2/21/1976 | M | CC85242A | 573371-01
DUNNE | ANNETTE M | 10/26/1976 | F | AE88375D | 598423-02
DUNNE | CHRISTOPHER M | 8/1/2021 | M | BV81536A | 598423-01
I have loaded the text into an Excel Spreadsheet to work with it.
I was able to query our patient profile database to get the people with this insurance plan... but of course the data is never an ideal match.
For instance, some of the roster patients above have middle initials concatenated to the first name. In my database it is a mixed bag: missing initials, initials concatenated to the first name, or initials in a separate Middle field. Thus a strict match on name is not going to work.
Date of Birth should hopefully be valid between both data sources.
Probably the best data to validate on would be the VW Acct #, as I trust it to be the same in both sets of data. However, in my patient database it is buried in a note field, preceded by "Vital Works ID: " and then the number, e.g. 602659-02. Generally it is the first part of the note field, but there could be additional notes preceding or trailing the Vital Works ID info.
An example of the query I was able to pull from the patient database is as follows:
Last | First | Middle | Date of Birth | Gender | Notes
Clark | Lawrence | J | 9/7/1955 | M | Vital Works ID: 7575-01
Clark | Kayleeann | NULL | 1/3/1955 | F | Vital Works ID: 7575-02
Cole | Cody | NULL | 8/19/1948 | F | Vital Works ID: 8771-02 snt ref req to ohms for impact appt tbs Sent ref req back to ohms for Impact-DX.
Creasey | Wade | L | 7/9/1988 | F | Vital Works ID: 602659-02
Kenny | Roy | J | 2/27/1953 | F | Vital Works ID: 602679-02
Utt | Jannie | C | 4/11/1984 | M | Vital Works ID: 602715-01
West | Alicia | G | 9/9/1992 | M | Vital Works ID: 602736-02
Wright | Minnie | O | 2/17/1991 | M | Vital Works ID: 602736-03
Yankee | Donald | E | 10/27/1996 | M | Vital Works ID: 602762-03
Yankee | Stephana | A | 4/4/2001 | F | Vital Works ID: 602762-04
How could I now construct a query that tells me which patients in the eligibility roster don't have a match in the patient database?
I would like to then save the result to Excel, or somewhere I can print it from, so I can have someone update the database.
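Assuming the roster has been imported into a SQL table, one way is to look for each roster account number inside the notes with CHARINDEX and keep the rows that find no match. A sketch with hypothetical table and column names (Roster and PatientNotes):

SELECT r.LastName, r.FirstName, r.DateOfBirth, r.VWAcct
FROM Roster r
WHERE NOT EXISTS (
        SELECT 1
        FROM PatientNotes p
        WHERE CHARINDEX('Vital Works ID: ' + r.VWAcct, p.Notes) > 0);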
Hi all, I found a problem with my database and was wondering if anyone here could shed some light on the issue.
I have two tables, Absences and AbsenceDates. The first one records the absence of an employee and the second one records a row for each day of the occurrence. When I do a full select on the second table, I see keys that do NOT exist in the select of the first table. So I dug further, and here is what I found.
Select * from Absences  (rowcount in Query Analyser is: 20883)
Select * from Absences Order By AbsenceID Desc  (rowcount is 443)
The second select contains the data that I am missing in the first select. So, I called a friend and they said to run DBCC CHECKDB and I did. The data came back as follows...
DBCC results for 'Absences'.
There are 21337 rows in 243 pages for object 'Absences'.
CHECKDB found 0 allocation errors and 0 consistency errors in database 'EmployeeAbsenteeism'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Now, if you add up the rows that the two selects return, it comes to 21326, not 21337. I am assuming that the value DBCC reports comes from sysobjects and that some sort of update would need to be run for it to be accurate. That I don't care about too much; what I really need is for my main select statement to return ALL of the data, not just what it feels like returning.
My experience is with programming mainly (6 years in .net) and not DBA, so any help would be greatly appreciated.
Cheers, Brent
**** UPDATE ***** I tried running the following SQL:
select * Into Absences1 From Absences
The results were: (21337 row(s) affected)
Now, a select * from Absences1 returned 20883 rows... :(
****** UPDATE ******* I ended up fixing it the following way:
Select * Into AbsencesFix From Absences WHERE PrimaryKey < 110577
GO
INSERT INTO AbsencesFix ({allfields})
(Select {allfields} FROM Absences WHERE PrimaryKey > 110577)
GO
sp_rename 'Absences', 'Absences_oldCorrupt'
GO
sp_rename 'AbsencesFix', 'Absences'
GO
This little script did the trick, I can now select all the data in the table.
Hope the fix that was given to me by another DBA friend of mine can help someone out who has a similar problem.
After loading the BCP files that are created during the trigger/reporting events, I've noticed that the data in the table is missing records. I've also noticed that the missing records (records in the table but not in the BCP out files) seem to occur in contiguous blocks. Since the complete set of records exists in the table, I assumed this points to an issue in the way the TableUpdate script/triggers interact with the system. But I tried taking the bcp procedure out and testing the trigger alone, and then no data was missing; so I think the problem is still in the bcp part. Could you help me with that?

CREATE TABLE [EventUpdate] (
    [id] [int] NOT NULL,
    [eventid] [int] NOT NULL,
    [sequenceid] [int] NOT NULL,
    [UpdatePass] [int] NULL
) ON [PRIMARY]
GO
create trigger trgEventUpdate on EventLog For Insert, Update as
insert into EventUpdate (id, eventid, sequenceid)
select ins.id, ins.eventid, ins.sequenceid from inserted ins

bcp script:
bcp "select a.* from w..eventlog a, w..eventupdate b where a.eventid=b.eventid and a.sequenceid=b.sequenceid and b.eventid<>-1 and b.sequenceid<>-1 and b.updatepass=1" queryout 30sec-%TFN_NOW%.wrk -U <userwithaccess> -P <password> -S doserver -f EventLog.fmt

Thanks in advance for your reply!
I wanted to demonstrate how SSIS can easily read an Excel file into a database. I started up the wizard (dtswizard) and looked and looked but couldn't find "Excel" in the list of data sources. Next I started up the SSIS IDE, found an Excel Destination, and created a flow that errored out on the copy-to-Excel step.
>>>
SSIS package "Package.dtsx" starting.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Error: 0xC0202009 at Package, Connection manager "Excel Connection Manager": An OLE DB error has occurred. Error code: 0x80040154. An OLE DB record is available. Source: "Microsoft OLE DB Service Components" Hresult: 0x80040154 Description: "Class not registered".
Error: 0xC020801C at Data Flow Task, Excel Destination [666]: The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0202009.
Error: 0xC0047017 at Data Flow Task, DTS.Pipeline: component "Excel Destination" (666) failed validation and returned error code 0xC020801C.
Error: 0xC004700C at Data Flow Task, DTS.Pipeline: One or more component failed validation.
Error: 0xC0024107 at Data Flow Task: There were errors during task validation.
SSIS package "Package.dtsx" finished: Failure.
>>>
I don't understand that message. I suspect the problem is that Excel is NOT installed on the box where I'm running the package. Does that seem right to you? Also, if Excel is needed on the box, how can I develop such a package on a laptop (in the airport) that doesn't have Excel?
Confused, but good!
Barkingdog
P.S. As the saying goes, "I never had this much trouble importing/exporting an Excel spreadsheet in SQL 2000 DTS."
I am working with a database containing time series data. In many cases there is missing data. For example, while there might be a value for 2001-01-01T23:00:00, there is none for 2001-01-01T23:01:00 (one minute later). I would like to replace the missing data with data from the previous record (if the previous record is on the same date). Is that possible with T-SQL?
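It is. One approach on SQL Server 2005 or later is to generate the minute-by-minute range with a recursive CTE and, for each minute, pull the most recent reading from the same calendar date with OUTER APPLY. A sketch, assuming a hypothetical Readings(ts, val) table:

DECLARE @from datetime, @to datetime;
SET @from = '2001-01-01T00:00:00';
SET @to   = '2001-01-01T23:59:00';

WITH minutes AS (
    SELECT @from AS ts
    UNION ALL
    SELECT DATEADD(minute, 1, ts) FROM minutes WHERE ts < @to
)
SELECT m.ts, prev.val
FROM minutes m
OUTER APPLY (
    SELECT TOP 1 r.val
    FROM Readings r
    WHERE r.ts <= m.ts
      AND DATEDIFF(day, r.ts, m.ts) = 0   -- only carry forward within the same date
    ORDER BY r.ts DESC
) prev
OPTION (MAXRECURSION 0);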
I am using Transactional Replication as a backup for our main database. The subscriber database however is smaller than the original database. Is there something that I'm missing? Is the subscriber missing data?
Hello All I am wanting to fill a drop down list in ASP.NET using C# from a SQL database table using a stored procedure. I have my Sproc. But using ASP.NET C# I have no idea how to do this. Can someone give me a good example, and if not too much trouble, place comments in the code, and give an explanation. I am just learning ASP.NET after moving from Classic. Things are alot different.
I have a ton of data to load into a SQL 2005 database. I just loaded a bunch of data for a number of tables using bcp, and the last table that my script loaded was an 8 million row table. The next table was a 12 million row table, and about 1 million rows into the bcp'ing, a log-full error was incurred. I have the batch size set to 10000 for all bcp commands. Here is the bcp command that failed:
Here is the last part of the output from the bcp command:
...
10000 rows sent to SQL Server. Total sent: 970000
10000 rows sent to SQL Server. Total sent: 980000
10000 rows sent to SQL Server. Total sent: 990000
SQLState = 37000, NativeError = 9002
Error = [Microsoft][ODBC SQL Server Driver][SQL Server]The transaction log for database 'billing_data_repository' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
BCP copy in failed
I thought that a commit was issued after every 10000 rows and that this would keep the log from filling up.
The log_reuse_wait_desc column in sys.databases is set to 'LOG_BACKUP' for the database being used.
Does a checkpoint need to be done more often?
Besides breaking up the 12 million row data file into something more manageable, does anyone have a solution?
How can I continue to use my same loading script, and keep the log from filling up?
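A batch does commit every 10000 rows, but under the FULL recovery model committed log records still cannot be overwritten until a log backup runs, which is exactly what log_reuse_wait_desc = 'LOG_BACKUP' is telling you; more frequent checkpoints will not help. Two common fixes, sketched with hypothetical backup paths:

-- Option 1: keep FULL recovery and free log space during the load
-- by taking log backups (repeat as needed, or schedule them every few minutes):
BACKUP LOG billing_data_repository TO DISK = 'E:\backups\billing_log.trn';

-- Option 2: switch to BULK_LOGGED for the load window so the bcp load
-- can be minimally logged, then switch back:
ALTER DATABASE billing_data_repository SET RECOVERY BULK_LOGGED;
-- ...run the bcp loads...
ALTER DATABASE billing_data_repository SET RECOVERY FULL;
-- Take a log (or full) backup afterwards to re-establish the backup chain.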