Today I have a very similar situation, only today I am dealing with missing text data, not numeric data.
DECLARE @MissingTextData TABLE
(
RowID int
,UserID int
, EmailAddress varchar(20)
,StreetAddress varchar(20)
[code]...
I would like to fill in the NULL columns with data from the other row, and then select the one row that is filled with all the data. I was able to use MAX() for a numeric value, but I am really stumped on the text data. Nothing I have tried works.
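For what it's worth, MAX() in SQL Server also accepts character columns and ignores NULLs, so the same collapse-by-group approach may carry over to the text fields. A minimal sketch only, assuming UserID is the key to group on:

SELECT
    UserID
    , MAX(EmailAddress)  AS EmailAddress   -- MAX works on varchar and skips NULLs
    , MAX(StreetAddress) AS StreetAddress
FROM @MissingTextData
GROUP BY UserID;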
I need to use a newly installed SSIS component inside an SSIS 2012 project, but in SSDT 2010 I cannot see the SSIS Data Flow Items tab in the Choose Toolbox Items pane for adding the data source/data destination.
I am looking for a way to convert the following format into a SQL table. The format is BibTeX.
Essentially each entry, denoted by an @ sign, would become a new row in the table, and each column is denoted by an =. As you can see from the example data, no single entry contains all the possible columns, and some fields can run over two lines.
To load this I was considering reading the file into a table with each line as a row, adding a row number, then adding a column that counts the @ signs in order so the lines of each record are grouped together. Then, for each group, I would look for the column keywords 'author', 'title', etc. and split the data out into its constituent parts using SUBSTRING and CHARINDEX (a sketch of this follows the sample below).
@Book{hicks2001,
  author = "von Hicks, III, Michael",
  title = "Design of a Carbon Fiber Composite Grid Structure for the GLAST Spacecraft Using a Novel Manufacturing Technique",
  publisher = "Stanford Press",
  year = 2001,
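A hedged sketch of that approach, assuming the file has already been bulk-loaded line by line into a staging table RawLines(LineNo int, LineText varchar(max)) (both names are made up) and that the server is SQL 2012 or later for the windowed SUM. Fields that span two lines would still need extra handling.

-- Sketch only: a running count of '@' lines groups each BibTeX entry,
-- then CHARINDEX/SUBSTRING pull out the value after the '=' for two keywords.
;WITH Grouped AS
(
    SELECT
        LineNo
        , LineText
        , SUM(CASE WHEN LineText LIKE '@%' THEN 1 ELSE 0 END)
              OVER (ORDER BY LineNo ROWS UNBOUNDED PRECEDING) AS EntryNo
    FROM RawLines
)
SELECT
    EntryNo
    , MAX(CASE WHEN LineText LIKE '%author%=%'
               THEN LTRIM(SUBSTRING(LineText, CHARINDEX('=', LineText) + 1, 4000)) END) AS author
    , MAX(CASE WHEN LineText LIKE '%title%=%'
               THEN LTRIM(SUBSTRING(LineText, CHARINDEX('=', LineText) + 1, 4000)) END) AS title
FROM Grouped
GROUP BY EntryNo;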
Create Table Sample (ID int not null primary key, RefID int , SeqNo int , Name varchar(10) )
insert into Sample
select 1, 1000, 1, 'Mike' union
select 2, 1000, 2, 'Mikey' union
select 3, 1000, 3, 'Michel' union
select 4, 1001, 1, 'Carmel' union
[code]....
select * from Sample

I have given sample data here. What I want to do is check which RefIDs do not have their sequence numbers in proper order. If you look at RefID 1000 and 1001, they have a proper sequence order in the SeqNo field, but RefID 1002 does not, because a user deleted the row that had SeqNo 2. So I want to get all the RefIDs that are not properly sequenced, so that I know which RefIDs were affected by the delete statement the user ran.
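A minimal sketch of one way to flag them: if SeqNo always starts at 1 and increments by 1, then any RefID whose row count does not match its highest SeqNo has a gap.

-- Sketch: RefIDs whose SeqNo sequence has a hole (assumes sequences start at 1).
SELECT RefID
FROM Sample
GROUP BY RefID
HAVING COUNT(*) <> MAX(SeqNo);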
I am trying to get a count by product, month, and year even if there is no record for that particular month.
Current outcome:

Product  Month     Year  Count
XYZ      January   2014  20
XYZ      February  2014  14
XYZ      April     2014  34
...

Desired outcome:

Product  Month     Year  Count
XYZ      January   2014  20
XYZ      February  2014  14
XYZ      March     2014  0
XYZ      April     2014  34
...
The join statement is simple:

Select Product, Month, Year, Count(*) As Count
From dbo.Products
Group By Product, Month, Year
I have also tried the following code, left joining it with my main query, but the product is left out, as can be seen in the second attempt below:
DECLARE @Start DATETIME, @End DATETIME;
SELECT @Start = '20140101', @End = '20141231';

WITH dt(dt) AS
(
    SELECT DATEADD(MONTH, n, DATEADD(MONTH, DATEDIFF(MONTH, 0, @Start), 0))
    FROM
    (
        SELECT TOP (DATEDIFF(MONTH, @Start, @End) + 1)
               n = ROW_NUMBER() OVER (ORDER BY [object_id]) - 1
        FROM sys.all_objects
        ORDER BY [object_id]
    ) AS n
)
2nd attempt:

Product  Month     Year  Count
XYZ      January   2014  20
XYZ      February  2014  14
NULL     March     2014  0
XYZ      April     2014  34
...
What I want is this (as is shown above). Is this possible?
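One way to keep the product name on the zero-count months is to cross join the distinct product list with the month calendar first, and only then left join the sales rows. A sketch, continuing from the dt CTE above and assuming dbo.Products stores the month as a name:

-- Sketch only: every product paired with every month; months with no rows count as 0.
SELECT
    p.Product
    , DATENAME(MONTH, d.dt) AS [Month]
    , YEAR(d.dt)            AS [Year]
    , COUNT(s.Product)      AS [Count]
FROM (SELECT DISTINCT Product FROM dbo.Products) AS p
CROSS JOIN dt AS d
LEFT JOIN dbo.Products AS s
       ON  s.Product = p.Product
       AND s.[Month] = DATENAME(MONTH, d.dt)
       AND s.[Year]  = YEAR(d.dt)
GROUP BY p.Product, DATENAME(MONTH, d.dt), YEAR(d.dt);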
I need to write the query that produces the results below. I'm not able to join the two sets in a way that displays NULLs when no purchase was made on a given day for a particular product. I need NULLs (or zeros) so that it shows up correctly on my SSRS report.
;with testdata as
(
    SELECT 1 AS Id, '1/6/2014' AS Date, 21 As Amount UNION ALL
    SELECT 1, '1/8/2014', 25 UNION ALL
    SELECT 1, '1/9/2014', 30 UNION ALL
    SELECT 1, '1/10/2014', 60 UNION ALL
    SELECT 1, '1/5/2015', 3800 UNION ALL
    SELECT 1, '1/6/2015', 7120 UNION ALL
I have recently started using replication in SQL 2012 SP1. When a stored procedure is altered on the source, the changes are replicated to the subscribers; however, the comment headers are removed at the subscribers. Due to the vast number of stored procedures I have, I do not want to move the comments below the Create Procedure statement. Are there any other ways to have the comment headers move with the stored procedures?
Here is what I am experiencing:
Source SP
ALTER PROCEDURE [dbo].[SPTest] AS BEGIN SELECT GETDATE() END
Destination SP
ALTER PROCEDURE [dbo].[SPTest] AS BEGIN SELECT GETDATE() END
The witness server information is missing and the failover suddenly broke. It happened around 4:00 AM, when no maintenance job runs; the only SQL job I have runs at 10 PM and backs up the transaction log only.

I can see the primary had a problem and then performed a failover to the mirror database, but the automatic failover was broken.

I rebuilt the SQL mirror and it is OK now, but I want to find the root cause.

The Windows Application event log was full because there were many failure events; I have since increased the log size for the Application event log.
I am trying to parse data separated by text markers (i.e. abc1, abc2, abc3, abc4, etc.).
ID  ParseData
1   [abc1.Pants/abc2.Orange hat /abc3.Purple shirt /abc4./abc5./abc6./abc7./abc8.]
2   [abc1.Gray Shoes/abc2.Striped jacket /abc3./abc4./abc5./abc6./abc7./abc8.]
3   [abc1.Blue jeans/abc2./abc3./abc4./abc5./abc6./abc7./abc8.]

New data (abc1, abc2, abc3, etc. each have a field in the new data set):

ID  ParseData             abc1        abc2            abc3          abc4  abc5  abc6  abc7  abc8
1   [abc1.Pants...abc8.]  Pants       Orange hat      Purple shirt
2   [abc1.Gray...abc8.]   Gray Shoes  Striped jacket
3   [abc1.Blue...abc8.]   Blue Jeans
If I only want the data in between abc1 and abc2, between abc2 and abc3, etc, what would be the best way to do that?
My code so far looks like:

DECLARE @string varchar(100) = '[abc1.Pants/abc2.Orange hat /abc3.Purple shirt /abc4./abc5./abc6./abc7./abc8.]',
        @searchString1 varchar(20) = 'abc1',
        @searchString2 varchar(20) = 'abc2';
SELECT newstring FROM dbo.SubstringBetween(@string,@searchString1,@searchString2);
This returns 'Pants.' How do I continue to parse between abc2 and abc3? Between abc3 and abc4? And then continue to ID 2? Should I be referencing the ParseData field instead of the string of data that I want to parse?
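One hedged way to do the whole row in one pass is to repeat the same CHARINDEX/SUBSTRING pair for each pair of neighbouring markers, reading from the table's ParseData column rather than a local variable. A sketch against a hypothetical table dbo.ParseSource(ID, ParseData), showing only the first two columns:

-- Sketch: text between 'abc1.' and '/abc2.', then between 'abc2.' and '/abc3.', and so on.
SELECT
    d.ID
    , LTRIM(RTRIM(SUBSTRING(d.ParseData,
          CHARINDEX('abc1.', d.ParseData) + 5,
          CHARINDEX('/abc2.', d.ParseData) - CHARINDEX('abc1.', d.ParseData) - 5))) AS abc1
    , LTRIM(RTRIM(SUBSTRING(d.ParseData,
          CHARINDEX('abc2.', d.ParseData) + 5,
          CHARINDEX('/abc3.', d.ParseData) - CHARINDEX('abc2.', d.ParseData) - 5))) AS abc2
    -- repeat the same pattern through abc8; the closing ']' bounds the final segment
FROM dbo.ParseSource AS d;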
I'm trying to parse out a line of data that is separated by the text "atc1.", "atc2." etc.
For example,
[atc1.123/atc2.456/atc3.789/atc4.xyz/]
If I only want the data after atc2., then I could search the string for "atc2." and collect all the characters afterwards. But how can I make sure to trim off all the data after "atc3." to make sure I'm only collecting "456" from the example above?
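The same CHARINDEX/SUBSTRING idea works here; a small sketch with the example string:

DECLARE @s varchar(100) = '[atc1.123/atc2.456/atc3.789/atc4.xyz/]';

-- Start just after 'atc2.' and stop just before the '/atc3.' that follows it.
SELECT SUBSTRING(@s,
                 CHARINDEX('atc2.', @s) + 5,
                 CHARINDEX('/atc3.', @s) - CHARINDEX('atc2.', @s) - 5) AS Value;   -- returns 456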
So I know that each employee should have 2 Type 1's and 4 Type 2's. I hope that makes sense; I have altered my data because ours is very proprietary.
I need to identify employees who do not have all their stages and list the stages they are missing. The final report should only have employees and the associated missing types and stages.
I do a count by employee to see how many types they have to identify the ones that don't have all the types and stages.
My count would look something like this:
EmployeeNumber  Type  Total
100             1     2
100             2     2
200             1     1
200             2     2
So I know that employee 100 should have 2 more Type 2's and employee 200 should have 1 more Type 1 and 2 more Type 2's based on the required list.
The problem I'm having is taking that required list, joining it to my list of employees with missing data, and pulling out the types and stages that are missing for each employee. I thought I could get a list of the employees that are missing information and right join it to the required list, so the missing records would come back as NULLs. But that doesn't work, because some employees do have the required information, so I'm not getting any NULLs returned.
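A hedged sketch of a join shape that avoids that problem: build the full required list per employee first with a cross join, then anti-join the records each employee actually has, so only the missing type/stage rows survive. Every table and column name below is an assumption, not the real schema.

-- Required(Type, Stage): every type/stage combination an employee should have.
-- EmployeeStages(EmployeeNumber, Type, Stage): what each employee actually has.
SELECT e.EmployeeNumber, r.[Type], r.Stage
FROM (SELECT DISTINCT EmployeeNumber FROM EmployeeStages) AS e
CROSS JOIN Required AS r
LEFT JOIN EmployeeStages AS s
       ON  s.EmployeeNumber = e.EmployeeNumber
       AND s.[Type] = r.[Type]
       AND s.Stage  = r.Stage
WHERE s.EmployeeNumber IS NULL;   -- only the missing type/stage rows remain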
The project is a client/server data analysis system built with .NET 2.0 in a Windows environment; the OS is Microsoft Windows 2003 R2 Standard Edition Service Pack 2 and the database used in the project is SQL Server 2005. As a data analysis system it needs to load a large amount of data from file to database, which we do by creating a DTS package and then running the load with "m_Package.Execute(null, variables, m_PackageEvents, null, null)".
The problem is, we fount that DTS miss some data randomly sometimes, we can't find the rule till now. for example we've data as follows in data file, all data field splited by '|' 11234|26341|2007-09-03 00:00|0|0|0.0|0|0.011470833793282509|1|0.045497223734855652|0|0|1|0|3|2929|13130|43|0|2|0|0|40|1|0|0|0|0|0|1||0|0|3|0|0|0|0|0||0|3|0|0|43|43|0|41270|0|3|0|0|10|3|0|0|0|0|0||0|1912|0|0|3|0|0|0|0|0|0|0|3|0|0|5|0|40|0|9|0|0|0|0|0|0|0|0|29|1|1|24|24.0|16|16.0|0|0|0.0|0|0|24|23.980693817138672|0|0.0|0|0.0|0|0.0|0|0.0|11|2.0764460563659668|43|2|0|0|30|11|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|3|3|0|0|0|0|0|0|0|0|0|6|0|0|0|0|0|6|0|0|45|1|0|0|0|2|42|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|0|0|0|2|0|0|0|0|0|0|51|47|85|0|0|||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|||||||||||||0|0|0|0|0|97.117401123046875|0|0|83|57|||0.011738888919353485|0|1|0.065955556929111481|0|4|||0.00026658669230528176|1|0.00014440112863667309|1|68|53|12|2|1|2.0562667846679688|10|94|2|0|0|30|11|47|4|13902|7024|6878|18|85|4.9072666168212891|5|0.0|0|0.0|0|0.0|0|0.0|0|358|6448|6324|0|0|0|0||0||462|967|0|41|39|2|0|0|0|1|0|0|0|0|0|0|0|0|3|0|0|3|0|0|0|0|0|0|0|0|0|3|0|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|46|0|1|0|1|37|0|0|46|0|1|0|1|37|0|0|0|0|0|0|0|0|0.0|0|0|6|4|2|0|0|2|1|0|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|1|0.012290795333683491|0|44|44.0|0|0.0|0|0|0|30|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|0|2|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|0|0|0|0|0|0|0|0|0|0|0|0|27|0|0|2112|411|411|45|437|2|0|2|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|4|0|4|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|0|1|0|0|0|0|0|0|0|0|0|0|0|6|6|0|3|2|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|5|5.0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|600|600|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|6|0|0|0|0|0|0|6|0|9|1|2|2|3|0|1|0|0|0|0|0|0|0|0|0|0|0|13|3|2|5|1|1|1|0|0|0|102|0|1|1|0|0|0|3|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||0|0|0|0|0|0|0|0|0|0|0||0|0|0|0|0|0|0|0|0||||||||||0|0|0|0|0|0|0||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|46.0|46|0.0|0|0.0|0|0.011469460092484951|1|0.0|0|0.0|0|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|0.0|0|0|0|0|0|0|0|0|0|0|0|0|0|||0|100.0|100.0|0|1|0|1|0|0|0.02481505274772644|1|0.014951236546039581|1|0|0|0|0|0|0|0|0|0|0|0|0|0|||||||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|||0|||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|4695.5556640625|42260|7126.66650390625|62040|||||||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||||||0|0||||||||||
We found that some of the data fields become 'null' after the load action finishes. If we load the same data again, the problem disappears. We can't reproduce this issue 100% of the time and we don't know why. Can anybody here help us solve this issue or give us some clue?
Trying to learn how to use SQL Server CE. The tutorial says to open Server Explorer, Add Connection, New Connection, and select the data source as .NET SQL CE (words to that effect). In my Visual Studio it ain't there, and I can't figure out how to get it there. I have uninstalled all of the SQL CE stuff and reinstalled it. So I'm missing some key link. What are the magic incantations to get to first base?
Hi... Please help. I was having problems with my hard drive, so I got a new one. I installed a new instance of SQL Server 2005 and copied all my projects over from the old hard drive to the new one.

The problem is, when I open my packages, specifically the data flow tasks, they are empty. All my data flow items are gone.

I am not sure what I missed during the copy. Please help me work out how I can recover my complete packages.
I have XML stored in a column of a table, and I use the following query to extract an XML element:

select CONVERT(XML, CONVERT(NVARCHAR(max), Response)).value('(/Quote/error)[1]', 'nvarchar(max)') as Exception

The result of the expression is:

TL43:The product has no marked price.

I would like to select only the code, TL43, and then separately I would like to select "The product has no marked price." Is there a way I can do it?
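A minimal sketch for splitting that message on its first colon:

DECLARE @msg nvarchar(max) = N'TL43:The product has no marked price.';

SELECT
    LEFT(@msg, CHARINDEX(':', @msg) - 1)                   AS ErrorCode      -- TL43
    , SUBSTRING(@msg, CHARINDEX(':', @msg) + 1, LEN(@msg)) AS ErrorMessage;  -- The product has no marked price.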
I am using FOR XML to generate the HTML, and the following is my query. I am using td { white-space:nowrap; } so the columns do not wrap, but there are two columns that I do want wrapped. I have tried lots of options.
Hello, We have a query which returns ~2.8 million rows. This same query is used in a DTS package, which exports to a text file. The number of rows in this text file, however, is ~2.7 million rows (I'm rounding, of course). So a good chunk of data appears to have vanished in the export. Using SQL Server 7.0 on Windows 2000. Anyone see bugs with DTS text exports for very large amounts of data? Thanks, DF. "Never eat more than you can lift." - Miss Piggy
In SQL 2012, this fails with the error message "cannot find the text qualifier for field".

To get around this, we are having to import the data into a Dirty Data column of a TEMP table (ID, Dirty Data, Clean Data), perform multiple updates, and change the text qualifier, making sure the qualifiers are only changed in the right places so we can keep the ". In this example, we changed the text qualifier to pipes.

After these updates, we then export the data from the Clean Data column back out to CSV, then reimport it into the original destination table with a new text qualifier.
Full Text Searches are working on my test server, but not on my production server. My test scenario is as follows:
CREATE FULLTEXT CATALOG FTC_Test AS DEFAULT AUTHORIZATION dbo
CREATE FULLTEXT INDEX ON guest.FtsTest(FullName) KEY INDEX PK_FtsTest ON FTC_Test

I wait briefly and then check to see if the index has been populated (SELECT * FROM sys.fulltext_indexes); crawl_end_date is not null.
So I'm assuming I don't have to wait anymore before I try some FTS searches. Right? I can't get any queries to return anything, though.
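For example, a basic query along these lines (the search term is only an illustration) returns no rows:

SELECT *
FROM guest.FtsTest
WHERE CONTAINS(FullName, N'Smith');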
The following tells me the full text item count for the table is zero:
DECLARE @TableId INT
SELECT @TableId = id FROM sys.sysobjects WHERE [Name] = 'FtsTest'
SELECT OBJECTPROPERTYEX(@TableId, 'TableFulltextItemCount') AS TableFulltextItemCount
As mentioned, the full text search works on my test server. Both servers are SQL 2012 SP1 (11.0.3000) x64 running on Windows Server 2008 R2 SP1.
I am new to Full-Text Searches and FileTables. I am working on a resume search system. I created a FileTable and the share, which contains about 51,000 resumes in Word (DOC/DOCX), PDF and TXT formats that were copied over from our main file server. I followed the instructions on adding the Microsoft Filter Pack 2.0, and all searchable file types are listed in sys.fulltext_document_types.
I have a program that searches resumes by First and Last Name and/or E-mail Address. For a while we thought it was working fine because we were getting hits, but then we noticed some Names and E-mail Addresses we knew existed were not showing up in my CONTAINS queries, even though we verified the documents do exist and the Names appear in the file name and inside the document along with the E-mail Address.
One example I'm working on now: I cannot find the First/Last Name with CONTAINS(name,"lastname") in a Word DOC file, but I can find the document using LEFT(name,5) = 'Smith'. For testing purposes, I opened the DOC file in Word, then used Save As to create a TXT and a DOCX copy in the same folder. Now both of those show up in the CONTAINS results, but the original DOC file does not.
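One hedged diagnostic worth trying (the FileTable name dbo.Resumes below is made up): sys.dm_fts_index_keywords shows whether a given term made it into the full-text index at all, which helps separate an indexing problem from a query problem.

-- Does the term appear anywhere in the full-text index for the FileTable?
SELECT display_term, document_count
FROM sys.dm_fts_index_keywords(DB_ID(), OBJECT_ID('dbo.Resumes'))
WHERE display_term = N'smith';   -- hypothetical last name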
I'm standardizing street addresses, and I think full text search might work for my problem. My problem is that I have 2 million errors, and using "LIKE" doesn't work correctly. This is what I have now:
UPDATE addresses
SET standarAddress = REPLACE(addresses.streetAddress, errors.streetAddress, corrects.streetAddress)
FROM corrects
INNER JOIN errors ON corrects.id = errors.idCorrects
'SELECT E.EmployeeID FROM dbo.EmployeeGroupMapToEmployee E, dbo.Per_Budget B WHERE E.EmployeeID = B.PER_PERSONAL_ID AND B.PEB_Budget_id = 243 AND E.EmployeeGroupID IN (SELECT H.Id FROM dbo.EmployeeGroup H WHERE H.InstitutionsId = 22) GROUP BY E.EmployeeID '
If I replace @EmpFilterAddDuty with this in a query, it gives me the expected result, but not if I try to execute the stored procedure:
DECLARE @return_value int
EXEC @return_value = [dbo].[EP_Conterbalances]
    @Start_Date_For_Totals_Date = N'20120831',
    @EmpFilterAddDuty = 'SELECT E.EmployeeID FROM dbo.EmployeeGroupMapToEmployee E, dbo.Per_Budget B
[Code] .....
I get this error code:
Conversion failed when converting the varchar value 'SELECT E.EmployeeID FROM dbo.EmployeeGroupMapToEmployee E, dbo.Per_Budget B WHERE E.EmployeeID = B.PER_PERSONAL_ID AND B.PEB_Budget_id = 243 AND E.EmployeeGroupID IN (SELECT H.Id FROM dbo.EmployeeGroup H WHERE H.InstitutionsId = 22) GROUP BY E.EmployeeID ' to data type int.
I really do not understand why SQL 2012 tries to convert the value to an int, and I want to know how to pass the text string.
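A hedged guess at the cause: if @EmpFilterAddDuty is declared as int inside EP_Conterbalances, SQL Server tries to convert the SELECT text to int before the procedure body even runs, which matches the error. Declaring the parameter as a string and executing it dynamically is one way around it. The sketch below assumes the filter query returns a single EmployeeID column; everything other than the procedure and parameter names is invented.

ALTER PROCEDURE [dbo].[EP_Conterbalances]
    @Start_Date_For_Totals_Date date
    , @EmpFilterAddDuty nvarchar(max)      -- declared as text, not int
AS
BEGIN
    DECLARE @EmployeeIDs TABLE (EmployeeID int);

    -- Run the caller-supplied filter query and capture its result set.
    INSERT INTO @EmployeeIDs (EmployeeID)
    EXEC sys.sp_executesql @EmpFilterAddDuty;

    -- ... the rest of the procedure would filter against @EmployeeIDs ...
    SELECT EmployeeID FROM @EmployeeIDs;
END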
How do I import a text file with a list of NI numbers into a new table with a column listing all the NI numbers? I think I should use the SELECT INTO clause, but I am not sure how to do this.
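SELECT INTO reads from another table rather than from a file, so BULK INSERT (or the bcp utility) is probably the closer fit. A sketch with made-up file path and table name, assuming one NI number per line:

CREATE TABLE dbo.NINumbers (NINumber varchar(9) NOT NULL);

-- Load one NI number per line from the text file.
BULK INSERT dbo.NINumbers
FROM 'C:\import\ni_numbers.txt'
WITH (ROWTERMINATOR = '\n');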
I'm trying to define a linked server using Microsoft OLE DB Provider for Data Mining Services. MSDMine data provider is missing. Do you know where I can get this? Thanks in advance.
I have a Xeon 64-bit processor, using SP2, and had installed:

SQLServer2005_ADOMD_x64.msi, SQLServer2005_ASOLEDB9_x64.msi, SQLServer2005_OLAPDM_x64.msi, and SQLServer2005_XMO_x64.msi.
After some weeks evaluating tools and platforms for developing an application, I decided to move to SQL Server 2005 Express Edition. Everything was fine till last night, when after creating my tables, I needed to populate them. I tried to find the Import and Export Data Wizard that SQL Server 7.0 and 2000 used to have, but great was my surprise when I found - in this forum - a post that said that it's not available in the Express Edition.
I'll have to move back in time (which I hate) to remember the way BCP worked. Can somebody post some examples so I don't start from zero? Does anybody know a third-party visual tool that can import/export data from text files to a SQL Server DB via ODBC?
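A couple of hedged bcp examples as a starting point; server, database, table and file names are placeholders, -T uses Windows authentication, -c means character mode, and -t sets the field terminator:

rem Export a table to a comma-delimited text file:
bcp MyDatabase.dbo.Customers out C:\export\customers.txt -S .\SQLEXPRESS -T -c -t,

rem Import the same file back into an existing table:
bcp MyDatabase.dbo.Customers in C:\export\customers.txt -S .\SQLEXPRESS -T -c -t,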