I need to write a statement that returns the name, city, and state of each vendor that's located in a unique city and state. In other words, I cannot include vendors that share a city and state with another vendor.
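A hedged sketch of one way to do this, assuming a Vendors table with vendor_name, vendor_city, and vendor_state columns (the table and column names are my assumption): group by city and state, keep only the pairs that occur once, and join back to pick up the names.

select v.vendor_name, v.vendor_city, v.vendor_state
from Vendors v
join (
    -- city/state pairs held by exactly one vendor
    select vendor_city, vendor_state
    from Vendors
    group by vendor_city, vendor_state
    having count(*) = 1
) u
   on u.vendor_city = v.vendor_city
  and u.vendor_state = v.vendor_state;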
I'm trying to upload 1000 txt files into one table in SQL Server. I'm using the following query to upload one txt file at a time:
bulk insert [dbo].AAA_2013_2015
from '\\dataserver\SQL Data Files\SQL_EMELIZFC x Bloque Detallada\201308 Detalle Facturas\FACT_BLOQ_AGO13 (4).txt'
with (firstrow = 2, lastrow = ???, fieldterminator = ';', rowterminator = '0x0A')
I'm trying to get the query to skip the last row, because it gives me the following error:
Msg 4866, Level 16, State 1, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 17. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
Is there a command to skip the last row, something like lastrow = all-1, or something like that?
I also tried using the MAXERRORS option, like this:
bulk insert [dbo].AAA_2013_2015
from '\\dataserver\SQL Data Files\SQL_EMELIZFC x Bloque Detallada\201308 Detalle Facturas\FACT_BLOQ_AGO13 (15).txt'
with (firstrow = 2, fieldterminator = ';', MAXERRORS = max_errors, rowterminator = '0x0A')
It does not recognize the MAXERRORS option; I also tried putting an actual number of errors instead of max_errors.
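There is no built-in lastrow = all-1 syntax, but one workaround is to count the file's lines first and feed that count into LASTROW via dynamic SQL. A hedged sketch, assuming xp_cmdshell is enabled, SQL Server 2008 or later (for the inline DECLARE initialization), and a shortened placeholder path:

declare @file varchar(260) = 'C:\data\FACT_BLOQ_AGO13 (4).txt';   -- placeholder path
declare @cmd varchar(300) = 'find /c /v "" "' + @file + '"';
declare @lines int, @sql nvarchar(max);

-- find /c /v "" prints one line per file: "---------- <file>: <count>"
create table #out (line varchar(500));
insert #out exec master..xp_cmdshell @cmd;

-- grab everything after the last colon and cast it to int
select @lines = cast(ltrim(right(line, charindex(':', reverse(line)) - 1)) as int)
from #out
where line like '%:%';

-- skip the header (firstrow = 2) and the last line (lastrow = @lines - 1)
set @sql = N'bulk insert [dbo].AAA_2013_2015 from ''' + @file + N''' '
         + N'with (firstrow = 2, lastrow = ' + cast(@lines - 1 as nvarchar(10))
         + N', fieldterminator = '';'', rowterminator = ''0x0A'')';
exec (@sql);

drop table #out;

As a side note, MAXERRORS is a real BULK INSERT option, but it takes a literal number (e.g. MAXERRORS = 10); max_errors in the documentation is just a placeholder.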
In the sp_create_plan_guide documentation, it's written:
When SQL Server matches the value of statement_text to batch_text and @parameter_name data_type [,...n ], or if @type = 'OBJECT', to the text of the corresponding query inside object_name, the following string elements are not considered:
- White space characters (tabs, spaces, carriage returns, or line feeds) inside the string.
- Comments (-- or /* */).
- Trailing semicolons.
On SQL Server 2008 SP3, I created a plan guide for a query. Now, if I execute the query exactly as it was defined in the plan guide, SQL Server matches it and uses the plan guide to optimize the query.
However, if I add just a space between a column name and an operator in the WHERE clause, the plan guide is ignored. How come it doesn't ignore the extra space, as mentioned in the documentation?
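For reference, a hedged sketch of the kind of plan guide involved; the guide name, query, and hint are all hypothetical. Apart from the elements listed above, statement_text has to match the submitted batch character for character.

EXEC sp_create_plan_guide
    @name = N'PG_Orders_ByCustomer',   -- hypothetical name
    @stmt = N'SELECT * FROM dbo.Orders WHERE CustomerId = 42;',
    @type = N'SQL',
    @module_or_batch = NULL,           -- NULL: @stmt is matched as a standalone batch
    @params = NULL,
    @hints = N'OPTION (MAXDOP 1)';     -- hypothetical hint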
I have two tables, 'Cal_date' and 'RPT_Invoice_Shipped'. Table Cal_date has columns month_no, start_date, and end_date. Table RPT_Invoice_Shipped has columns Day_No, Date, Div_code, Total_Invoiced, Shipped_Value, Line_Shipped, Unit_Shipped, and Transaction_Date.
I am using the insert statement below to insert data into the RPT_Invoice_Shipped table.
insert into [Global_Report_Staging].[dbo].[RPT_Invoice_Shipped]
    (Day_No, Date, Div_code, Total_Invoiced, Transaction_Date)
select
    ,                                          -- Day_No expression missing here
    CONVERT(DATE, GETDATE()) as Date,
    LTRIM(RTRIM(div_Code)),
    SUM(tot_Net_Amt) as Total_Invoiced,
    (DATEADD(day, -1, CONVERT(date, GETDATE())))
from [Global_Report_Staging].[dbo].[STG_Shipped_Invoiced]
where CONVERT(DATE, Created_date) = CONVERT(DATE, GETDATE())
group by div_code
When inserting into the Day_No column of the RPT_Invoice_Shipped table, I have to use the formula (Transaction_Date - start_date + 1), where Transaction_Date comes from STG_Shipped_Invoiced and start_date comes from the Cal_date table. I was using DATEPART(mm, Transaction_Date) to get the month_no; that month_no can be joined to month_no of the Cal_date table to fetch start_date, so start_date can be used in the formula (Transaction_Date - start_date + 1).
But I am having difficulty arranging this in the query above. How can I achieve it?
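A hedged sketch of one way to wire the Cal_date join into the query; the table and column names come from the post, but the month_no matching and the exact meaning of (Transaction_Date - start_date + 1) are my assumptions:

insert into [Global_Report_Staging].[dbo].[RPT_Invoice_Shipped]
    (Day_No, Date, Div_code, Total_Invoiced, Transaction_Date)
select
    datediff(day, c.start_date, dateadd(day, -1, convert(date, getdate()))) + 1 as Day_No,
    convert(date, getdate()) as Date,
    ltrim(rtrim(s.div_Code)),
    sum(s.tot_Net_Amt) as Total_Invoiced,
    dateadd(day, -1, convert(date, getdate())) as Transaction_Date
from [Global_Report_Staging].[dbo].[STG_Shipped_Invoiced] s
join [Global_Report_Staging].[dbo].[Cal_date] c
    -- month_no of the transaction date (yesterday) matches Cal_date.month_no
    on c.month_no = datepart(month, dateadd(day, -1, getdate()))
where convert(date, s.Created_date) = convert(date, getdate())
group by s.div_Code, c.start_date;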
I'm creating a SQL stored procedure. Inside, the proc returns some information about the user, i.e. location, logged in, last logged in, etc. I need to join this onto the photos table and return the photo which has been set as the profile picture; if one hasn't been set, then return the first top 1, if that makes sense?
The user has the option to upload photos, so there might be no photos for a particular user, which I believe I can fix by using a left join.
My photos table is constructed as follows:
CREATE TABLE [User].[User_Photos](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [UserId] [bigint] NOT NULL,
    [PhotoId] [varchar](100) NOT NULL,
    [IsProfilePic] [bit] NULL,
[Code] ....
Currently, as it stands, the proc runs but doesn't return a particular user because they haven't uploaded a photo, so I need to somehow tweak the above to return null if a photo isn't present, which is where I'm stuck.
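A hedged sketch of the photo part, assuming a [User].[Users] table keyed on Id (an assumption): OUTER APPLY returns at most one photo per user, preferring the profile picture and falling back to the first photo, and yields NULL when the user has no photos at all.

select u.Id, up.PhotoId
from [User].[Users] u                    -- hypothetical users table
outer apply (
    select top (1) p.PhotoId
    from [User].[User_Photos] p
    where p.UserId = u.Id
    -- profile pic first, then the earliest uploaded photo
    order by case when p.IsProfilePic = 1 then 0 else 1 end, p.Id
) up;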
use mysql1
declare @bp xml
select @bp = xml
;WITH XMLNAMESPACES('http://schemas.openehr.org/v1' as bp,
                    'http://www.w3.org/2001/XMLSchema-instance' as xsi,
                    'OBSERVATION' as type)
select * from (
    select m.c.value('(./bp:data/bp:items[1]/bp:value[1]/bp:magnitude)[1]', 'int') as systolisch
    from BloodpressureMitSchema
    cross apply @bp.nodes('/bp:content/bp:data/bp:events') as m(c)
) m
But with this cross apply I can only query all the values in one XML and repeat them. Is there something wrong with the declare?
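A hedged sketch of the usual fix, assuming BloodpressureMitSchema has an xml column (called bp_xml here, which is an assumption): cross apply the column itself instead of a single @bp variable, so each row is queried against its own XML rather than repeating one document.

;with xmlnamespaces('http://schemas.openehr.org/v1' as bp)
select m.c.value('(./bp:data/bp:items[1]/bp:value[1]/bp:magnitude)[1]', 'int') as systolisch
from BloodpressureMitSchema t
cross apply t.bp_xml.nodes('/bp:content/bp:data/bp:events') as m(c);   -- bp_xml is assumed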
I've looked at several different methods for removing leading zeros from a column, but I need to remove trailing data from a VARCHAR column. For some reason, the old database saved the time alongside the date in my client's app.
For example:
The old database format is "2015-07-28 00:00:00".
I need the data in this column in the new database to be only the date, "2015-07-28"; there are a lot of rows with this issue.
Is there a query I can run to remove the 00:00:00 from all of the rows? Some of the fields actually have a real time in there, like 2015-07-28 12:15:35; with those I don't think it's going to be easy, but if I could at least remove the 00:00:00 from all the rows that have it, that would be a good start.
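A hedged sketch, assuming the column is called DateCol on dbo.MyTable (both names are placeholders): since the column is VARCHAR and the dates are in yyyy-MM-dd format, keeping the first 10 characters strips any trailing time, whether it is 00:00:00 or a real one like 12:15:35.

update dbo.MyTable
set DateCol = left(DateCol, 10)      -- '2015-07-28 12:15:35' -> '2015-07-28'
where len(DateCol) > 10;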
Every Sunday, new data is loaded from a temp table into the main table. I have to make sure that duplicates do not get loaded from the temp table into the main table.
For example, if a record got loaded from temp to main last Sunday, and this Sunday the same record is present again, then it is a duplicate.
A duplicate is decided by the scenario below:
select 'CodeChanges: ', count(*)
from CodeChanges a, CodeChanges_Temp b
where a.AccountNumber = b.AccountNumber
  and a.HexaNumber = b.HexaNumber
  and a.HexaEffDate = b.HexaEffDate
  and a.HexaId = b.HexaId and
[Code] ...
Yesterday (Sunday), data from temp got loaded onto the main table, but with duplicates.
There is a log which displays just the number of duplicates.
Yesterday the log displayed 8 duplicates found. I need to find the 8 duplicates which got loaded yesterday and delete them from the main table.
There is a column in both tables which is 'creation date and time'. Every Sunday when the load happens, this column has that day's date.
Now I need to find out which duplicates got loaded this Sunday.
The total number of rows in the temp table is 363; the number of duplicates present is 8.
I used the query below to find the duplicates, but it returns all 363 rows from the main table instead of the 8 duplicates.
Select 'CodeChanges: ', *
from CodeChanges a
where exists (
    Select 1
    from CodeChanges_Temp b
    where a.HexaNumber = b.HexaNumber
      and a.HexaEffDate = b.HexaEffDate
      and
[Code] ...
I need to find the duplicate records which have a creation date/time of '2015-11-01 00:00:00.000' and where all the columns mentioned in the query match.
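A hedged sketch, assuming the 'creation date and time' column is named CreationDateTime (an assumed name) and that the listed columns define a duplicate. The query above returns all 363 rows because every row loaded on Sunday trivially matches its own source row in temp; the actual duplicates are the Sunday rows that also match an earlier row already sitting in the main table:

select a.*
from CodeChanges a
where a.CreationDateTime = '2015-11-01 00:00:00.000'   -- only Sunday's load
  and exists (
      select 1
      from CodeChanges older                           -- same table, earlier loads
      where older.CreationDateTime < a.CreationDateTime
        and older.AccountNumber = a.AccountNumber
        and older.HexaNumber = a.HexaNumber
        and older.HexaEffDate = a.HexaEffDate
        and older.HexaId = a.HexaId
  );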
I have a scenario where I need to add a blank column to a table that is published for replication. This table contains over 100 million records. What is the best way to add the column? In the past, when I had to make an update, it broke replication, because the update would take forever; jobs are continuously updating the table, so replication can't catch up.
If I alter a table and add a column, would this column automatically get picked up in replication?
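A hedged sketch of the low-impact way to do the ALTER; the table and column names are examples. Adding the column as NULL with no default is a metadata-only change (no row-by-row update), which avoids the long-running UPDATE that broke replication before. Whether the new column flows to subscribers depends on the publication's replicate_ddl setting, which defaults to on for SQL 2005 and later.

-- metadata-only: nullable, no default, so existing rows are untouched
alter table dbo.BigPublishedTable
add NewCol varchar(50) null;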
I inherited a system which has an index on a set of columns which allow more than 900 bytes of data in it. We know one of the fields can be shortened to shrink the potential key size below 900 bytes.
The problem is the table is about 120m rows, and the index currently on that column is seeked (sought?) about 2.5m times a day.
At its simplest, I want to drop the existing index, alter the column to shrink the varchar size, and then rebuild the index on the newly shortened column (a sketch of that sequence follows my questions below).
On a smaller, less used table, I might just do this outside of business hours and call it a day, but I'm concerned that this will take a long time and block a lot of operations.
1) IIRC, shrinking a column, unlike widening it, is much more expensive, even if there are no values which would actually end up truncated. Is this right?
2) I did a few tests on some other smaller (2m+ row) tables and was still able to select data out of the table. I don't think this covered all the read scenarios, but are there known scenarios which would simply not work during an index build?
3) I haven't yet tried DML operations against the table while it's doing either the column alter or an index build. Which scenarios would or would not be blocked?
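The drop/alter/rebuild sequence, as a hedged sketch with placeholder names; ONLINE = ON assumes Enterprise Edition, and the new length is an example:

drop index IX_BigTable_WideCol on dbo.BigTable;

alter table dbo.BigTable
alter column WideCol varchar(400) not null;   -- shrink from the old, wider size

create index IX_BigTable_WideCol
on dbo.BigTable (WideCol)
with (online = on);   -- Enterprise-only; keeps the table readable and writable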
I have several databases to deal with, all with + 250 tables. The databases are not identical and do not conform to a specific naming convention for table names. Most but not all tables have a column called "LastUpdated" containing a date/time (obviously). I'd like to be able to find all rows within a whole database (table by table) where the date/time is greater than a specified date/time.
I'm looking for a reliable query that will return all the matching rows in each of the tables, without me having to write hundreds of individual scripts like "SELECT * FROM [dbo].[xyz] WHERE LastUpdated > '2015-01-01 09:00:00:000'", and without having to look through each table first to determine which of them has the LastUpdated column.
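A hedged sketch of the usual pattern (assumes SQL Server 2008 or later for the inline DECLARE; the cutoff date is an example): use the catalog views to build one SELECT per table that actually has a LastUpdated column, then run the generated batch.

declare @sql nvarchar(max) = N'';

select @sql = @sql
    + N'select ''' + s.name + N'.' + t.name + N''' as TableName, * from '
    + quotename(s.name) + N'.' + quotename(t.name)
    + N' where LastUpdated > ''2015-01-01 09:00:00.000'';' + char(13)
from sys.tables t
join sys.schemas s on s.schema_id = t.schema_id
join sys.columns c on c.object_id = t.object_id
where c.name = 'LastUpdated';

exec sp_executesql @sql;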
I have two tables, Log and CategoryLog. I need to archive records older than 13 months in these two tables to two separate archive tables, and then delete the archived records from the Log and CategoryLog tables. The problem is that only the Log table has a date column; CategoryLog does not have one, but the two tables are connected by a column (LogID). How can I archive the data and then delete the archived data from both tables?
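A hedged sketch, assuming Log's date column is called LogDate and that Log_Archive and CategoryLog_Archive already exist with matching column layouts (all of these names are assumptions): the join on LogID carries Log's cutoff over to CategoryLog, and the deletes run child-first inside one transaction.

begin transaction;

insert into dbo.Log_Archive
select l.*
from dbo.Log l
where l.LogDate < dateadd(month, -13, getdate());

insert into dbo.CategoryLog_Archive
select cl.*
from dbo.CategoryLog cl
join dbo.Log l on l.LogID = cl.LogID
where l.LogDate < dateadd(month, -13, getdate());

-- delete the child rows first, then the parent rows
delete cl
from dbo.CategoryLog cl
join dbo.Log l on l.LogID = cl.LogID
where l.LogDate < dateadd(month, -13, getdate());

delete from dbo.Log
where LogDate < dateadd(month, -13, getdate());

commit;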
Can we bulk insert only the desired columns from a flat file into a table?
I am using SSIS to bulk insert from a file with more than 200 columns. I am trying to find a way to bulk insert them into multiple tables through SSIS.
The one way I can think of is to pre-map the columns from the file to the destination tables and build numerous Bulk Insert tasks to achieve that, but I'm not sure SSIS will let me do that.
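Outside SSIS, plain BULK INSERT can skip unwanted fields with a format file. A hedged sketch for a three-field file loading only fields 1 and 3 (all names and lengths are examples; the 9.0 version header matches SQL Server 2005 and should be adjusted to your version). Setting the server column order to 0 on field 2 discards it:

9.0
3
1   SQLCHAR   0   50   ";"      1   ColA   SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   50   ";"      0   ""     ""
3   SQLCHAR   0   50   "\r\n"   2   ColB   SQL_Latin1_General_CP1_CI_AS

bulk insert dbo.Target
from 'C:\data\source.txt'
with (formatfile = 'C:\data\source.fmt', firstrow = 2);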
A common partitioning scenario is when the partition column has the same value for every record in the partition, as opposed to a range of values. Am I the only person who wonders why there isn't an option to automatically partition a table based on the unique values of the partition column? Instead of defining a partition function with constants, you ought to be able to just give it the column and be done. This would be particularly valuable for tables partitioned on a weekly or monthly date; when new data is added it could simply create a new partition if one doesn't already exist.
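For contrast, a hedged sketch of the manual version being described, with example names and dates: a monthly RANGE RIGHT function, plus the SPLIT you have to remember to run yourself when a new month arrives.

create partition function pfMonthly (date)
as range right for values ('2015-01-01', '2015-02-01', '2015-03-01');

create partition scheme psMonthly
as partition pfMonthly all to ([PRIMARY]);

-- when data for a new month shows up, nothing happens automatically;
-- you must split in the new boundary by hand (or from a scheduled job)
alter partition scheme psMonthly next used [PRIMARY];
alter partition function pfMonthly() split range ('2015-04-01');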
Andrew Worral writes "I am currently working on a website that uses Full Text search to search the name of companies.
We are having trouble figuring out what tools are best suited for this with SQL 2005 Standard/Enterprise and how to implement them.
The first issue to address would be misspelled words, such as a search for "Davids Shoe Repare" returning "David's Shoe Repair".
Besides the spelling in "Davids Shoe Repare", there is also the issue of the apostrophe in "David's", which we have not come up with a good solution for yet; a search for David will not return "David's".
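One partial mitigation for the apostrophe, as a hedged sketch with hypothetical table and column names: the full-text word breaker generally splits "David's" at the apostrophe, so a prefix term on David can catch it even when an exact term does not. This does nothing for misspellings, though.

select CompanyName
from dbo.Companies                          -- hypothetical table
where contains(CompanyName, '"David*"');    -- prefix term also matches "David's"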
I have done a little looking into fuzzy matching with Integration Services, but I am not sure that is the right tool, nor am I sure of the overhead involved and any speed issues with it, nor am I in any way overly familiar with Integration Services."
I'm not quite sure that this is possible, but I figured I would check with you experts out there before trying a new approach. I've done quite a bit of research and have not seen anyone quite figure this out yet.
We have a SQL Server 2005 application that stores and indexes documents to the database as an image data type. I'm able to do full-text queries against the documents without any trouble. I begin to run into problems when trying to pattern match social security numbers and drivers licenses stored in a full-text index. I have a user defined function that I call which runs my regular expression that checks for hits of a ssn or license number in the index. I have no problem getting hits when the data sits in a column.
I do need to mention that I have no trouble when searching for an SSN with a fixed value where I know the SSN (e.g. 123-45-6789). I am actually trying to find the existence of the pattern ###-##-#### (e.g. ^\d{3}-\d{2}-\d{4}$) anywhere in the index. Any help would be very much appreciated.
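For character data, T-SQL's LIKE can express the digit pattern without a regex; a hedged sketch with hypothetical table and column names (this only works once the document text is available as character data, not against the image column or the full-text index directly):

select DocId
from dbo.Documents                -- hypothetical table and columns
where DocText like '%[0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]%';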
I am getting an error importing a csv file, both using SSIS and SSMS. The csv is comma delimited with quotes for text qualifiers. The file gets partially loaded and then gives me an error stating: The column delimiter for column "MyColumn" was not found. In SSIS it shows me the data row which is apparently causing the problem, but when I look at the file in a text editor at the specific row identified, the comma delimiter is there and it looks fine. I am using SQL Server 2008.
Why won't this work? POST_DT is nvarchar(50). I know it should be datetime, but I have no control over that; that's why I'm creating a temp table. But I want to insert only the most recent invoice. There is code to create test data at the end.
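A hedged sketch of the "most recent" part, assuming a source table with INVOICE_NO and the nvarchar POST_DT (names are assumptions) and SQL Server 2012+ for TRY_CONVERT: cast the string to datetime just for ordering and keep the newest row per invoice.

select *
into #inv
from (
    select r.*,
           row_number() over (partition by r.INVOICE_NO
                              order by try_convert(datetime, r.POST_DT) desc) as rn
    from dbo.InvoiceSource r      -- hypothetical source table
) x
where x.rn = 1;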
I've got the value below in a column with data type TEXT:
QU 221025U2V/AN G-DT DL A 5 1A- 11,5,SF,230,30162,LZ,2,118,0,0,10170,25,06
This text value has some special characters in it, and I could not paste the exact value, as this text box does not allow me to; for reference, I've attached a screenshot (Capture.png) of the value.
I want to fetch the last two values from this text, i.e. 25 and 06. (They can be anything, like 56R or 06T, but they will be the last two values separated by commas.)
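A hedged sketch, assuming the column is called Val on dbo.MyTable (both hypothetical): TEXT can't be string-manipulated directly, so cast to varchar(max), reverse the string, cut at the second comma, and reverse back.

select reverse(left(rs, charindex(',', rs, charindex(',', rs) + 1) - 1)) as last_two
from (
    select reverse(cast(Val as varchar(max))) as rs   -- '...,25,06' -> '60,52,...'
    from dbo.MyTable
) t;
-- returns '25,06' for the sample value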
Hi everyone, I encountered the error "Need to run the object to perform this operation. Code execution exception: EXCEPTION_ACCESS_VIOLATION" when I tried to import data from Oracle to MS SQL Server with Enterprise Manager (version 8.0) using the DTS Import/Export Wizard. There are 508 rows in the Oracle table, and I did get the first 42 rows imported to SQL Server. Does anyone know what the above error message means and what causes the rest of the rows to fail importing? Thanks very much in advance! Rene Z.
I'm using SQL Server 2000. I have a table named Topic which has a column named lineage. lineage has data like the following: //////546707//546707//546707/43213/ Now I want to get the records which have only one "/" in their corresponding lineage column. Can someone tell me how to do that in SQL Server 2000? Thanks, Khushbu
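A hedged sketch that runs on SQL Server 2000: a row has exactly one '/' when removing every slash shortens the string by exactly one character.

select *
from Topic
where len(lineage) - len(replace(lineage, '/', '')) = 1;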
I am running SQL 2008 R2. I have SQL Full-Text Search installed (part of a standard automated build), but disabled and switched off, as the applications don't need it. However, as a result I get SQL Server error 9954:
SQL Server failed to communicate with filter daemon launch service (Windows error: Windows Error: hr = 0x80070422(failed to retrieve text for this error)). Full-Text filter daemon process failed to start. Full-text search functionality will not be available.
appearing in the event/error logs every time I start SQL Server. Is it possible to suppress this error (other than by starting the SQL Full-Text service)?
I have a table which contains a TEXT column and which is accessed by each page of the site. It is getting frequent deadlocks. Can anybody suggest a good solution, please?
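Without the deadlock graphs it's hard to say, but one common mitigation (an assumption, not a diagnosis) is row versioning, so readers of that hot table stop blocking writers:

-- needs a moment with no other active connections to switch on; MyDb is a placeholder
alter database MyDb set read_committed_snapshot on;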
I am using DTS to transfer some tables from one server to another as part of a migration. We want to be down for as little time as possible, but we need the most up-to-date copy of the database tables in question.
I am currently testing the transfer process in our test environment by migrating the data from one database to another on the same SQL instance.
There are 7 tables to transfer and the total size of the database is 450 MB (with around 117 MB used). The two largest tables have around 17,000 records each.
One table (the header) has no text column, and it takes just a few seconds to transfer. The other table (the detail) has two columns, one of which is a text column. (Actually, it's not fair to call it the detail table; the relationship is one-to-one, but for the sake of this discussion, let's leave it at that.)
The header takes seconds to transfer, but the detail takes up to 18 minutes.
Physically, our test server is quite robust: 2 processors, a 3-disk RAID-5 for the data files, and a separate RAID 1 partition for the logs. Performance counters don't indicate any real issues: during the transfer, the disk utilization on the data partition occasionally spikes to a high level, but comes right back down until the next spike (the spikes being separated by about 1 minute). No issues with memory, paging or CPU.
I have removed the clustered index on the affected table as well as the PK. No help.
Are text columns just slow? Is there something that I am missing?