I have table1 with col1 varchar, col2 int, col3 xml, col4 bit, etc. What is the best way to fetch col3 out to a file (.txt or .sql), with a separate file for each col3 record? I want to export each record to a different file, if possible with a date and time stamp in the file name.
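A sketch of one approach, assuming xp_cmdshell is enabled, that the database is called YourDb, that col1 uniquely identifies a row, and that C:\Export\ exists (all of those are assumptions, not part of the original setup): generate one bcp command per row and append a timestamp to each file name.

DECLARE @cmd varchar(4000), @key varchar(100), @stamp varchar(20);
-- yyyymmdd_hhmmss stamp for the file names
SET @stamp = REPLACE(REPLACE(REPLACE(CONVERT(varchar(19), GETDATE(), 120), '-', ''), ':', ''), ' ', '_');

DECLARE c CURSOR FOR SELECT col1 FROM dbo.table1;
OPEN c;
FETCH NEXT FROM c INTO @key;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- one file per row: col3_<key>_<timestamp>.txt
    SET @cmd = 'bcp "SELECT col3 FROM YourDb.dbo.table1 WHERE col1 = ''' + @key + '''" '
             + 'queryout "C:\Export\col3_' + @key + '_' + @stamp + '.txt" -T -c';
    EXEC master..xp_cmdshell @cmd;
    FETCH NEXT FROM c INTO @key;
END
CLOSE c;
DEALLOCATE c;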
I have a table with about half a million records, each representing a patient in my county.
Each record has a field (RRank) which basically sorts the patients as to how "unwell" they are according to a previously-applied algorithm. The most unwell patient has an RRank of 1, the next-most unwell has RRank=2 etc.
I have just deleted several hundred records (which relate to patients now deceased) from the table, thereby leaving gaps in the RRank sequence. I want to renumber the remaining recs to get rid of the gaps.
I can see what I want to accomplish by using ROW_NUMBER, thus:
SELECT ROW_NUMBER() Over (ORDER BY RRank) as RecNumber, RRank FROM RPL ORDER BY RRank
I can see the numbers in the RecNumber column falling behind the RRank values as I scan down the results.
My question is: How to convert this into an UPDATE statement? I had hoped that I could do something like:
UPDATE RISC_PatientList_TEMP SET RRank = ROW_NUMBER() Over (ORDER BY RRank);
but the system informs me that windowed functions can only appear in a SELECT (which UPDATE isn't) or an ORDER BY clause (which I can't legally add).
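One workaround that I believe works is to put the ROW_NUMBER() into a CTE and update through it; a minimal sketch against the table named in the attempted UPDATE above:

WITH Renumbered AS
(
    SELECT RRank,
           ROW_NUMBER() OVER (ORDER BY RRank) AS RecNumber
    FROM RISC_PatientList_TEMP
)
UPDATE Renumbered
SET RRank = RecNumber;   -- each remaining row gets its gap-free position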
I have a de-normalized table that I need to export to XML using FOR XML, but with all of the related rows placed under the same node. The table is a lot more complicated than the example below, but for proof-of-concept purposes I'll keep it really simple:
Is there an existing option that deals with this automatically, or do I essentially need to do a group by to output the campaign element, and then union an ungrouped select to output the price element?
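As far as I know there is no single FOR XML option that collapses the repeated parent values automatically; the usual pattern is a DISTINCT (or grouped) outer query for the campaign element with a correlated subquery producing the nested price elements. A rough sketch with made-up table and column names:

SELECT  c.CampaignId    AS '@id',
        c.CampaignName  AS '@name',
        (
            SELECT p.Price AS '@value'
            FROM dbo.CampaignPrices AS p          -- made-up source table
            WHERE p.CampaignId = c.CampaignId
            FOR XML PATH('price'), TYPE
        )
FROM (SELECT DISTINCT CampaignId, CampaignName
      FROM dbo.CampaignPrices) AS c
FOR XML PATH('campaign'), ROOT('campaigns');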
I need a query to return a result for each unique machine with the latest date. The example result below would be returned because those rows have the latest date for each machine.
MachineA  5/7/2011
MachineB  5/5/2010
Select Distinct would almost do it, but I need each unique machine that has the latest date.
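Presumably a GROUP BY with MAX gets there; a minimal sketch, with made-up table and column names:

SELECT MachineName, MAX(RecordDate) AS LatestDate   -- made-up names
FROM dbo.MachineLog
GROUP BY MachineName;

If other columns from the latest row are needed as well, filtering on ROW_NUMBER() OVER (PARTITION BY MachineName ORDER BY RecordDate DESC) = 1 in a derived table is an alternative.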
I am creating a Google Maps application using ASP.NET with C#. I have the part working where the markers are shown from the database (lat, long). Now, depending on the lat/long, I want to display customer, sales, and total sales for each marker in HTML table format.
Now I want only the records that have flag2 = 1 and nothing else, i.e. ID = 3 has only flag2 = 1, whereas IDs 1 and 2 have flag1 and flag3 = 1 along with flag2 = 1. I don't want IDs 1 and 2.
I can't make ID unique or a primary key. I tried CASE WHEN statements, but I am somehow missing the basic logic.
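Since the exact layout isn't shown, the sketch below assumes a single table in which the same ID can appear on several rows and flag1/flag2/flag3 are bit columns; it keeps only the IDs where flag2 is ever set and flag1/flag3 never are:

SELECT ID
FROM dbo.FlagTable                      -- made-up table name
GROUP BY ID
HAVING MAX(CAST(flag2 AS int)) = 1
   AND MAX(CAST(flag1 AS int)) = 0
   AND MAX(CAST(flag3 AS int)) = 0;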
I have 2 tables in a 1:n relation. How can I write a SELECT statement that outputs the field on the n side as a single value, with the entries separated by semicolons? Example: one person has many job titles.
Table1 (tblPerson) joined to Table2 (tblTitles), desired output in one SELECT statement:
1, "John", "Miller", "Employee; Admin; Consultant"
2, "Joan", "Stevens", "Employee; Software Engineer; Consultant"
and so on ....
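On SQL Server 2017+ STRING_AGG would do this directly; on older versions the usual STUFF/FOR XML PATH pattern works. A sketch with made-up column names:

SELECT  p.PersonId,
        p.FirstName,
        p.LastName,
        STUFF((SELECT '; ' + t.Title
               FROM tblTitles AS t               -- made-up column names
               WHERE t.PersonId = p.PersonId
               FOR XML PATH('')), 1, 2, '') AS Titles
FROM tblPerson AS p;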
Are there any useful SQL queries that might be used to identify lists of potential duplicate records in a table?
For example, I have a client database that includes a table dbo.Clients. This table contains various columns which could be used to identify possible duplicate records, such as Surname | Forenames | DateOfBirth | NINumber | PostalCode etc. The data contained in these columns is not always exactly the same due to differences caused by user data entry; some records may have missing data in some of the columns, and there could be spelling differences too, as in the following examples:
1 | Smith | John Raymond | NULL       | NI990946B     | SW12 8TQ
2 | Smith | John         | 06/03/1967 | NULL          | SW12 8TQ
3 | Smith | Jon Raymond  | 06/03/1967 | NI 99 09 46 B | SW12 8TQ
The problem is that whilst it is easy for a human being to review these 3 entries and conclude that they are most likely the same client entered into the database 3 times, I cannot find a reliable way of identifying them using a SQL query.
I've considered using some sort of concatenation into a new column, minus white space, and then using a "WHERE column_name LIKE pattern" query, but so far I can't get anything to work well enough. Fuzzy logic, maybe?
The results would produce a grid something like this for the example above:
ID | Surname | Forenames    | DuplicateID | DupSurname | DupForenames
1  | Smith   | John Raymond | 2           | Smith      | John
1  | Smith   | John Raymond | 3           | Smith      | Jon Raymond
9  | Brown   | Peter David  | 343         | Brown      | Pete D
next batch of duplicates etc etc . . . .
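One starting point might be a self-join on the columns least likely to differ (postcode plus a phonetic match on surname), which surfaces candidate pairs for human review rather than deciding automatically. A sketch; the ClientID column name is an assumption:

SELECT  a.ClientID, a.Surname, a.Forenames,
        b.ClientID AS DuplicateID, b.Surname AS DupSurname, b.Forenames AS DupForenames
FROM dbo.Clients AS a
JOIN dbo.Clients AS b
  ON  a.ClientID < b.ClientID                        -- skip self-matches and mirrored pairs
  AND a.PostalCode = b.PostalCode
  AND SOUNDEX(a.Surname) = SOUNDEX(b.Surname);

Adding something like DIFFERENCE(a.Forenames, b.Forenames) >= 2 could narrow the list further, but the candidate pairs would still need a human eye.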
I have a table that I need to do some computations on for all the data, but first I need to remove the duplicate records and insert the results into a destination table. Here's the example below. My table has 3.1 million rows. I have tried using DISTINCT and GROUP BY, but both ways of selecting the data take about half a minute to run. I'm wondering if there is a way to increase performance. Users are OK with this time since the process runs overnight, but improving it wouldn't hurt. I do have a clustered index on these fields, but that doesn't seem to improve anything.
I have around 3 tables holding around 20 to 30 GB of data. Table A is related to table B by a FK, and likewise table B is related to table C by a FK. I would like to delete all rows satisfying a certain condition from table A, along with all corresponding related records from tables B and C. I have created a query to delete from the grandchild table first, followed by the child table, and finally the parent. I have used an inner join in my delete query. As you all know, inner-join delete operations can be extremely resource-intensive, especially on bigger tables.
What is the best approach to delete all these rows? There are many constraints and triggers on these tables. Also, there might be some FK relations to other tables as well.
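One common pattern is to capture the matching parent keys once, then delete in small batches from the grandchild up to the parent, so each transaction and the log stay manageable. A rough sketch; every table and column name below is made up:

-- capture the parent keys once
SELECT a.AId
INTO #ParentsToDelete
FROM dbo.TableA AS a
WHERE a.SomeCondition = 1;          -- whatever the real condition is

-- grandchild first, in batches
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) c
    FROM dbo.TableC AS c
    JOIN dbo.TableB AS b ON b.BId = c.BId
    JOIN #ParentsToDelete AS p ON p.AId = b.AId;
    IF @@ROWCOUNT = 0 BREAK;
END

-- then repeat the same batched loop for TableB, and finally TableA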
I am using Reporting Services 2000. I would like to export the report to CSV format. The column header value is based on a field value, which can change every month (I am trying to get the current month).
For instance, the report will show:
CUSTNAME  JAN  FEB
Jamie     100  200
When I export that to a CSV file in ASCII encoding, I get:
CUSTNAME  Period1  Period2
Jamie     100      200
The column headers are based on datafield name.
I understand that I can set the column header from the DataElementName property. However, I need the value to be based on a datafield value.
What's the best way to export data from SQL Server to XML format? I've taken over a VB application written to carry out this task, but it seems more complicated than it needs to be. Is it possible to just skip VB and use, say, a DTS package instead?
I have been given a schema (XSD) file, and as far as I'm aware any XML output has to be formatted according to this schema.
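If the SQL Server version in use supports it, FOR XML can shape rows into elements directly, and the file write can then be a bcp or DTS step rather than VB; a rough sketch with made-up object names (matching the XSD would mostly be a matter of aliasing tables and columns to the element names it expects):

SELECT OrderId, OrderDate, CustomerName   -- made-up columns
FROM dbo.Orders AS [Order]                -- with FOR XML AUTO the alias becomes the element name
FOR XML AUTO, ELEMENTS;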
In one of my reports I am calling a subreport which renders fine to Excel on its own, but when it is called in the main report as a subreport inside a data table cell, that cell gives the error message "Subreports within table/matrix cells are ignored."
While rendering to PDF format it works fine; I only have the problem with Excel rendering.
I am trying to convert a string like 'le dd/mm/yyyy' into a datetime. I have removed the 'le ' part and used convert(datetime, 'dd/mm/yyyy', 103) to convert it into a datetime. This works, for example, for 'le 22/11/1799', but for 'le 09/11/1716' it does not work.
select convert(datetime, RIGHT('le 22/11/1799', LEN('le 22/11/1799') - 3), 103)   -- works
select convert(datetime, RIGHT('le 09/11/1716', LEN('le 09/11/1716') - 3), 103)   -- does not work
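Presumably the issue is that datetime only covers dates from 1753-01-01 onward, so 1716 is simply out of range; on SQL Server 2008 or later, converting to datetime2 (or date) with the same style 103 would be one way around it:

select convert(datetime2, RIGHT('le 09/11/1716', LEN('le 09/11/1716') - 3), 103)   -- datetime2 accepts years back to 0001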
I have a DTS package which reads from a SQL table and writes to an Excel file. It's working fine, except that I have a decimal field in the SQL table, but in the Excel file it is written as a string field.
The way I create this package is that I create a template file and format that column as "Number". Then I take this template file, rename it, and export all the data to it.
But when I open this file, that decimal field is displayed as a string column and is left-aligned.
Help! I have a table that has a datetime field. I exported the table to a CSV, then dropped it and tried some other data, but now SQL doesn't recognise the date format for importing (heck, neither do I!). The dates look something like 40:58.1, which means nothing to me, or to any of us here for that matter... Any ideas? John
I have a problem when exporting a report to Excel.
The problem is with the custom formatting. The report has a field named Amount with its Format property set to C (in the properties window of the textbox in the report designer). When the user exports the report everything seems OK, calculations and so on... but the problem appears when a cell in another workbook makes a reference to the Amount cell of the exported report. The exported report has the format [$-1010409]$#,##0.00;($#,##0.00) on the Amount cell. In fact, every format type from the report designer begins with [$-1010409].
To reproduce this error:
Make a simple RDL with a textbox formatted as C. Export it to Excel. Create a new workbook and make a cell reference to the formatted textbox cell of the exported report (='\\Computer\Folder\[ExportedReport.xls]Sheet1'!$E$15). Close the exported report and the new workbook, then open the new workbook (not the exported one) and update the reference. This results in a #REF error.
Hi all, I need to export/generate a data file in DBF format from a SQL Server 2000 table. I wonder how this can be done inside SQL Server 2000? Would DTS help? Please advise.
Has anyone out there worked on a project to export data from a SQL Server Database into the SAS JMP file format?
I want to create an SSIS package to take snapshots of our database at regular intervals and export the data directly into a SAS JMP File. I have no idea how to go about doing this.
declare @deadline Datetime = '2014-03-23 15:30:10.000'
SELECT CONVERT(VARCHAR(30), @deadline, 100) AS DateConvert
-- With this I am able to produce output like: Mar 23 2014 3:30PM
I'm having a bit of an issue with a project I'm working on presently. Originally, I was supposed to be importing Excel files placed into a folder with unpredictable names, but consistent format. That was easy enough, and that's handled.
However, the project has now been expanded; I need to also import CSV files, still with unpredictable names (easy), but both the Excel and CSV files can now have unpredictable columns.
For example, I was originally told I'd be working with columns A,B,C,D,E, but now I'm working with files that can have that format, or A,B,F,G,C,D,E, or any variation as such.
For the Excel files, I can easily just do a SELECT * INTO... with a staging table and OPENROWSET, so that's no problem. The CSVs, however, I'm not so lucky with; I can't seem to use OPENROWSET for it. I could use BULK INSERT, but I can't use it in the form of a SELECT * INTO...
Compounding this is that there can be typos in the column headers, which I also just ran into.
Am I missing something here? Is there means of easily adapting to the varying column header names and positions?
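For the CSV side, one thing that might work, assuming the ACE OLE DB provider is installed on the server and ad hoc distributed queries are enabled (both assumptions), is pointing OPENROWSET's text driver at the folder, which keeps the SELECT * INTO shape; the folder and file name below are made up, and the exact file-name escaping may need tweaking:

SELECT *
INTO dbo.Staging_Csv
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Text;Database=C:\ImportFolder\;HDR=YES',
    'SELECT * FROM [incoming_file#csv]');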