I have a DTS package that does txt -> SQL Server.
I have 200 txt files with exactly the same format.
I just want to know if I can write an SP that takes a parameter and loads these txt files, because I don't want to create 200 packages or 200 sources to load 200 txt files.
say:
exec SP_loadTXT txt1
Or should I use BULK INSERT?
Any approach is fine. Any suggestions are fine too.
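Something like this is what I'm picturing -- just a rough sketch; dbo.TxtStaging and the field/row terminators are placeholders for the real file layout, and the file name has to go through dynamic SQL because BULK INSERT does not accept a variable directly:

CREATE PROCEDURE dbo.SP_loadTXT
    @FileName varchar(255)
AS
BEGIN
    DECLARE @sql varchar(1000)

    -- build the BULK INSERT statement dynamically because the file name is a parameter
    SET @sql = 'BULK INSERT dbo.TxtStaging FROM ''' + @FileName + ''''
             + ' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'

    EXEC (@sql)
END

-- example call, one per file (the path is hypothetical):
-- EXEC dbo.SP_loadTXT 'C:\loads\txt1.txt'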
I have to archive the flat files into an archive folder after they are loaded, with a date-time stamp on the folder in the format mmddyyyyhhmmss. Which task should I use in SSIS to do this, and can the task handle multiple files in the directory? How should I configure it so that all files are archived and placed in one directory stamped with the current timestamp? Please provide me with a solution.
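What I'm picturing (a sketch, not a tested package) is a Foreach Loop container over the *.txt files with a File System Task inside set to move each file, and the destination folder built from an expression on a string variable. A rough SSIS expression for the mmddyyyyhhmmss folder name, assuming a variable such as @[User::ArchiveRoot] that you would have to create (and the folder created once before the loop so every file lands in the same timestamped directory), could look like:

@[User::ArchiveRoot] + "\\" +
RIGHT("0" + (DT_WSTR, 2) MONTH(GETDATE()), 2) +
RIGHT("0" + (DT_WSTR, 2) DAY(GETDATE()), 2) +
(DT_WSTR, 4) YEAR(GETDATE()) +
RIGHT("0" + (DT_WSTR, 2) DATEPART("hh", GETDATE()), 2) +
RIGHT("0" + (DT_WSTR, 2) DATEPART("mi", GETDATE()), 2) +
RIGHT("0" + (DT_WSTR, 2) DATEPART("ss", GETDATE()), 2)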
I am kind of confused about the way SQL Server 2000 handles the hints that users supply with their SQL statements.

From BOL, it seems that one can specify them with "WITH (...)" clauses in SQL statements, known as table hints. Sometimes, multiple uses of this form in a statement are OK. Then there is the OPTION clause for specifying statement hints. However, the documentation on the OPTION clause discourages its use.

Being relatively new to SQL Server and still learning about it, what is the general practice? Use hints or not? And if so, how (through WITH or OPTION clauses)?

Cheers!
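To make sure I'm reading BOL correctly, here is what the two forms look like to me, with made-up table names (not a recommendation either way):

-- table hint: applies to one table reference, specified with WITH (...)
SELECT *
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate >= '20040101'

-- statement (query) hint: applies to the whole statement, specified with OPTION (...)
SELECT o.OrderID, c.CompanyName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
OPTION (HASH JOIN, MAXDOP 1)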
I am running SQL 7 SP2 and noticing that the query processor does table scans when I issue a BETWEEN 'date1' AND 'date2' instead of using the datetime index. If I put in the index hint (INDEX = ix_datetimeXXXX) the query runs fine. My question is: does this index hint restrict the use of other indexes in the query, and secondly, how can I specify multiple index hints? Thanks in advance.
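For reference, here is the hint syntax I'm experimenting with (table and index names are placeholders, and I'd double-check the exact form against BOL for SQL 7): a single INDEX hint can name more than one index, and each table in the query can carry its own hint:

-- one table, more than one index named in a single INDEX hint
SELECT *
FROM dbo.Orders (INDEX (ix_datetimeXXXX, ix_customerid))
WHERE OrderDate BETWEEN '19990101' AND '19990131'

-- separate hints on two different tables in the same query
SELECT o.OrderID, c.CompanyName
FROM dbo.Orders o (INDEX = ix_datetimeXXXX)
JOIN dbo.Customers c (INDEX = ix_companyname)
  ON c.CustomerID = o.CustomerID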
Whilst running a query I received the error below: "Cannot create a worktable row larger than allowable maximum. Resubmit your query with the ROBUST PLAN hint."
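For anyone searching on this error: ROBUST PLAN is a statement-level hint supplied in an OPTION clause. The query below is only a placeholder showing where the clause goes:

SELECT t.*
FROM dbo.SomeWideTable AS t          -- hypothetical tables
JOIN dbo.OtherTable AS o ON o.ID = t.ID
OPTION (ROBUST PLAN)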
My VB.Net (with SQL Server backend) application currently allows more than one user to look at a particular record at the same time. This is not a problem unless both those users also try to update that record as well. One user's changes then overwrite the other's.
I've been reading up on locking hints but my database knowledge is a little scant and I'm also rather dense and need things spelling out for me!! So I have a few questions that I hope someone can help with:
If I add an updlock to my update SQL statement, this would allow both users to view the record but would only allow one user's changes through. Is that correct?
For the other user, would SQL Server return an error message that I can use to tell the user that their update has not worked?
Would I have to get my VB.NET application to re-fetch the record so that the user whose update failed can see the changes made by the other user and reapply their own changes?
Does the updlock become unlocked once the record is updated or do I need to specifically unlock it somehow?
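A minimal sketch of the pattern I'm describing, with a made-up table and key (the comments reflect my understanding, so please correct me if it's wrong):

BEGIN TRANSACTION

-- Takes an update lock on the row. Other plain SELECTs can still read it,
-- but a second session running this same UPDLOCK SELECT on the same row waits here.
SELECT *
FROM dbo.Widget WITH (UPDLOCK, ROWLOCK)
WHERE WidgetID = 42

-- The waiting session does not get an error by default; it just blocks
-- (unless SET LOCK_TIMEOUT is in effect, in which case it receives error 1222).
UPDATE dbo.Widget
SET WidgetName = 'new value'
WHERE WidgetID = 42

COMMIT TRANSACTION  -- the update lock is released here automatically; no explicit unlock is needed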
I am having problems with what seems to be a very easy query. For some reason SQL Server is trying to do nested loops instead of a hash join. I tried to force the hash join using a join hint.
Query 1
select *
from DIM_DATE DD
inner hash join (
    select A.student_key,
           CONVERT(int, CONVERT(varchar, COALESCE(A.date_withdrawn, getdate()), 112)) AS date_withdrawn_current
    FROM FACT_STUDENT AS A
) SSE on DD.date_key = date_withdrawn_current

This query gives an error:
Msg 8622, Level 16, State 1, Line 1
Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN
The second query is not really what I want, but it illustrates that it works fine when getdate() is not used.
Query 2
select *
from DIM_DATE DD
inner hash join (
    select A.student_key,
           CONVERT(int, CONVERT(varchar, COALESCE(A.date_withdrawn, A.date_enrolled), 112)) AS date_withdrawn_current
    FROM FACT_STUDENT AS A
) SSE on DD.date_key = date_withdrawn_current

Is there some problem with using the getdate() function? It works fine in SQL Server 2000.
This problem occurs on the SQL Server 2005 SP2 ( 9.00.3050.00 (X64) ) and (9.00.2050)
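One workaround I'm considering (just a sketch; I don't know yet whether it actually avoids Msg 8622) is to evaluate getdate() into a variable first, so the hinted query itself contains no non-deterministic function call:

DECLARE @now datetime
SET @now = getdate()

select *
from DIM_DATE DD
inner hash join (
    select A.student_key,
           CONVERT(int, CONVERT(varchar, COALESCE(A.date_withdrawn, @now), 112)) AS date_withdrawn_current
    FROM FACT_STUDENT AS A
) SSE on DD.date_key = date_withdrawn_current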
I noticed that Books Online says the following: "Note: The SQL Server query optimizer automatically makes the correct determination. It is recommended that table-level locking hints be used to change the default locking behavior only when necessary." In another place, Books Online says: "The table hints are ignored if the table is not accessed by the query plan." From the above, it seems that using locking hints is not going to guarantee that SQL Server will follow them. Is this true?
Why does SQL Server work as follows when I do not provide any join hints? It looks like a HASH join is the best plan, but SQL Server does not pick it. What kind of JOIN method is used by the SQL optimizer?
Thanks in advance, Wonhyuk William Chung wonhyukc@usa.net MCSE/ MCT
-----------
use northwind
go
select orderid, CompanyName --productname,
from orders o
inner join customers c on o.customerID = c.CustomerID
/*
Table 'Orders'. Scan count 91, logical reads 184, physical reads 0, read-ahead reads 0.
Table 'Customers'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.
.0553
*/
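For comparison, here is the same query with the hash join forced, so the I/O and timings can be compared against the optimizer's default (just a test script):

use northwind
go
set statistics io on
set statistics time on
go
-- optimizer's choice (no hint)
select o.orderid, c.CompanyName
from orders o
inner join customers c on o.customerID = c.CustomerID
go
-- forced hash join, for comparison
select o.orderid, c.CompanyName
from orders o
inner hash join customers c on o.customerID = c.CustomerID
go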
Hi all, I'm trying to run queries on relatively small tables (a few hundred thousand rows) with subqueries of counts per primary key column, as such:
(ColA in tableA is the primary key)
select *
from tableA p
where exists (
    select 1
    from (
        select ColA, count(1) cnt
        from TableA
        group by ColA
        having count(1) > 1
    ) t
    where t.ColA = p.ColA
)
order by some_col
My problem is that SQL Server 2005 SP5 does not materialize the internal subquery properly, or execute it beforehand, and it gets confused as heck and pegs the CPUs at 100% forever.
What hints can I use to solve this issue? I've tried to use ..... "with ...." to prepare/materialize the table up front, with no luck; one version of the statement pegged one CPU at 100%, while the other statement pegged ALL CPUs at 100% -- I don't remember which.
My only solution right now was to create these subqueries as PHYSICAL tables -- that would solve the problem, but it would entail creating a lot of unnecessary objects.
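Not a hint, but one reformulation I could try is a windowed COUNT instead of the correlated EXISTS (column names are taken from the query above; no idea yet whether it actually gets a better plan):

select x.*
from (
    select p.*,
           count(*) over (partition by p.ColA) as cnt
    from tableA as p
) as x
where x.cnt > 1
order by x.some_col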
There is a trace flag that tells SQL Server to ignore index hinting in incoming queries. I'm having a Monday morning problem and I can't remember the trace number nor find it in my notes. Can anyone else come up with it?
I've got a SELECT WITH (UPDLOCK, ROWLOCK) WHERE followed by an UPDATE WHERE statement. The results of the SELECT statement are deserialized in C# and updates are made to the deserialized object. Then the object is serialized back into the table with the UPDATE statement. I've got this code running within a transaction scope with the ReadCommited isolation level.
My service receives requests to update data and the requests can come in on different threads. What I'm seeing, is that once in a while, the log messages from my application indicate that two different threads are able to issue the above SELECT statement and both are receiving results. This is a problem since the thread that issues the last UPDATE will overwrite the changes made by the first. Each thread has its own connection and transaction scope.
I've researched all over the place and have tried a few different things, but everything points to the fact that query hints are just hints and that SQL may or may not pay attention to them. If that's the case, how am I supposed to perform a SELECT with the intention of updating so that no one else can do the same? I haven't tried table-level locking, but I'd really like to avoid that if possible.
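Here is a stripped-down version of the pattern, with hypothetical table and column names rather than the real schema. One thing I've seen suggested (noting it only as a possibility, not a diagnosis) is that UPDLOCK only locks the rows the SELECT actually reads, so if the row can be missed or doesn't exist yet, HOLDLOCK is sometimes added to get range locking:

DECLARE @StateID int, @NewPayload varbinary(max)
SET @StateID = 1
SET @NewPayload = 0x00

BEGIN TRANSACTION   -- ReadCommitted isolation, as in the service

-- consider WITH (UPDLOCK, ROWLOCK, HOLDLOCK) if the key might not exist yet
SELECT Payload
FROM dbo.StateStore WITH (UPDLOCK, ROWLOCK)
WHERE StateID = @StateID

-- ... deserialize, modify and re-serialize in C# ...

UPDATE dbo.StateStore
SET Payload = @NewPayload
WHERE StateID = @StateID

COMMIT TRANSACTION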
Using merge replication, I have a table that is filtered using the HOST_NAME() function. The filter also makes use of a function (as HOST_NAME() is overridden to return some complex data).
Everything replicates and filters just fine, but when I add a join filter on a different table (the join filter is a simple foreign key join), I get the following error when the snapshot agent is run:
Message: Conflicting locking hints are specified for table "fn_GetIDList". This may be caused by a conflicting hint specified for a view. Command Text: sp_MSsetup_partition_groups Parameters: @publication = test1
fn_GetIDList is the function used in the original filter.
I would like to implement a kind of standard package that can be used in all other processes and will be started using variables.
But I do not know where to store this kind of package as a best practice, because we
- would like to use them in Dev and in "Real" as well, without having to change anything in the other processes
- are storing the packages in the folders of the package store
As far as I understood, I would have to share the package store with all developers so that they would be able to do this?
Then I would be better off choosing another folder with defined access rights, I think...
Or would it be better to spend some time developing a custom component? But this component would work with recordsets rather than the standard data flow elements, and therefore I would expect a loss of performance... Or is it possible to do a "transformation" from a package to a custom component?
This article instructed me on how to process rows from a table used as a data queue for multiple processes.
http://www.mssqltips.com/tip.asp?tip=1257
I tested this against the AdventureWorks DB (SQL 2005) and multiple SQL connections inside of SQL Server Management Studio.
Connection1:
BEGIN TRANSACTION
SELECT TOP 1 * FROM Production.WorkOrder WITH (updlock, readpast) --skips over locked rows
--COMMIT TRANSACTION
Connection2:
BEGIN TRANSACTION
SELECT TOP 1 * FROM Production.WorkOrder WITH (updlock, readpast) --skips over locked rows
COMMIT TRANSACTION
This works like I want: connection 2 skips over the locked row from connection 1 and gets the next available record from the table/queue. However, when I add an ORDER BY clause to each SQL statement, connection 2 is blocked waiting for connection 1 to commit. (This is not what I want.)
Connection1:
BEGIN TRANSACTION
SELECT TOP 1 * FROM Production.WorkOrder WITH (updlock, readpast) order by DueDate
--COMMIT TRANSACTION
Connection2:
BEGIN TRANSACTION
SELECT TOP 1 * FROM Production.WorkOrder WITH (updlock, readpast) order by DueDate --is blocked until connection 1 commits transaction
COMMIT TRANSACTION
How do I prevent blocking when using these locking hints with ORDER BY?
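One thing that is often suggested for this situation (I can't promise it applies here) is an index on the ORDER BY column, so that TOP 1 ... ORDER BY DueDate can read the index in order instead of sorting the whole set, which changes how the scan interacts with the locked rows:

CREATE NONCLUSTERED INDEX IX_WorkOrder_DueDate
ON Production.WorkOrder (DueDate)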
I'm running the SQL Server Agent service and SQL Server Integration Services under the same domain user account, and that same domain user account also has sysadmin rights in SQL Server.
Below are the SQLAgent.OUT file details that I get while running the SSIS package from a SQL Agent job.
[100] Microsoft SQLServerAgent version 9.00.1399.06 (x86 unicode retail build) : Process ID 4012
[101] SQL Server MachineNameXYZ version 9.00.1399 (0 connection limit)
[102] SQL Server ODBC driver version 9.00.1399
[103] NetLib being used by driver is DBNETLIB.DLL; Local host server is
[310] 4 processor(s) and 3072 MB RAM detected
[339] Local computer is MachineNameXYZ running Windows NT 5.2 (3790) Service Pack 1
[432] There are 11 subsystems in the subsystems cache
[125] Subsystem 'ActiveScripting' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'CmdExec' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'Snapshot' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'LogReader' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'Distribution' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'Merge' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'QueueReader' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'ANALYSISQUERY' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'ANALYSISCOMMAND' could not be loaded (reason: The specified module could not be found)
[125] Subsystem 'SSIS' could not be loaded (reason: The specified module could not be found)
[364] The Messenger service has not been started - NetSend notifications will not be sent
[129] SQLSERVERAGENT starting under Windows NT service control
[260] Unable to start mail session (reason: No mail profile defined)
[396] An idle CPU condition has not been defined - OnIdle job schedules will have no effect
[LOG] Step 1 of job 'TestJob' (0x4A197E88EAD5134581A35132A546414C) cannot be run because the SSIS subsystem failed to load. The job has been suspended
[LOG] Unable to read local eventlog (reason: The parameter is incorrect)
[LOG] Unable to read local eventlog (reason: The parameter is incorrect)
[LOG] Step 1 of job 'TestJob' (0xE412C2AB10C7A34B87F5DFC8CFD978CA) cannot be run because the SSIS subsystem failed to load. The job has been suspended
[LOG] Unable to read local eventlog (reason: The parameter is incorrect)
In case some of you have read my previous posts, you may be aware that I'm writing a webboard application as a replacement for the old one. The old one currently has approximately 50,000 topics in the database, each with on average 10 replies (I just checked recently; I thought it was only 7,000 topics). I need to provide paging and sorting features for the topic list, but I can't just SELECT all of them and let the GridView do the paging/sorting, right?

I have been using stored procedures to hold my SQL statements for several projects now. I know how to deal with the paging feature (ROW_NUMBER), but the sorting requires me to change the "ORDER BY" clause.

1. Can somebody tell me how to change the ORDER BY clause in the stored procedure(s) at runtime? Or does anyone have another approach? (A sketch of the kind of thing I mean is after this post.) Currently I'm thinking about moving back from stored procedures to hard-coded SQL statements, and then modifying/generating the SQL statement for each paging/sorting request. But I've learned that stored procedures give more performance and security.

2. Given the situation I described, is it worth moving from stored procedures to hard-coded SQL?

I'm also using a 3-tier architecture approach + OOP, but I've reached a conflict in my thoughts. According to OOP, I'm supposed to create classes that reflect the actual objects in the real world, right? In my case the classes are "Board, Topic, Reply, ....". Following this and the 3-tier approach, I intend to use ObjectDataSource as a bridge between the presentation logic and the business logic. But I wonder what my data source class should return.

3. Should my data source class return data objects, like the 1st approach:

[DataObject(true)]
public class TopicDataSource
{
    public static Topic[] GetTopicList() { }
}

or should it return a DataSet / DataTable / DataReader, like the 2nd approach:

[DataObject(true)]
public class TopicDataSource
{
    public static DataTable GetTopicList() { }
}

Personally I think approach 1 is more OOP and allows for more extensibility, but approach 2 might be faster.

4. If I go with approach 1, how should I control which property of my data objects is read-only after it has been inserted/created? Can I just set my data object's property to be read-only? Or do I have to set it at page level (i.e. GridView -> Columns -> BoundField -> ReadOnly=True)? Or do I set it at the page level and write code to throw an exception in the rare case the application/user tries to change its value? Or something else?

Please help. These questions have slowed me down for days now. If there are any concepts that I misunderstand, please tell me. I'm aware that I don't know as much as some of you. I will be extremely grateful to anyone who answers any of my questions. Thanks a lot.

PS. For those who think my questions are stupid, I'm very, very sorry that I bother you.
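For question 1, one commonly used shape for a runtime ORDER BY inside a stored procedure, combined with ROW_NUMBER paging, is sketched below; dbo.Topic and its columns (Title, LastPostDate) are placeholders for whatever the real schema is:

CREATE PROCEDURE dbo.GetTopicPage
    @PageIndex int,          -- zero-based page number
    @PageSize  int,
    @SortBy    varchar(20)   -- 'Title' or 'LastPost' (hypothetical sort keys)
AS
BEGIN
    ;WITH Numbered AS
    (
        SELECT t.TopicID, t.Title, t.LastPostDate,
               ROW_NUMBER() OVER (
                   ORDER BY
                       CASE WHEN @SortBy = 'Title'    THEN t.Title        END ASC,
                       CASE WHEN @SortBy = 'LastPost' THEN t.LastPostDate END DESC
               ) AS rn
        FROM dbo.Topic AS t
    )
    SELECT TopicID, Title, LastPostDate
    FROM Numbered
    WHERE rn BETWEEN @PageIndex * @PageSize + 1
                 AND (@PageIndex + 1) * @PageSize
    ORDER BY rn;
END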
I seem to have somehow got myself into a situation where I'm having to run the following SELECTs, one after another, on a single ASP page. This is not tidy. How can I join them all together so I get a single recordset returned with all my stats in different columns?
SELECT COUNT(*) FROM tblQuiz WHERE [q3] = '5 years +' OR [q3] = '2 - 4 years'
SELECT COUNT(*) FROM tblQuiz WHERE [q4] <> '' AND [q4] IS NOT NULL
SELECT COUNT(*) FROM tblQuiz WHERE [q5] = 'Unhappy'
SELECT COUNT(*) FROM tblQuiz WHERE [q6] = 'Yes'
SELECT COUNT(*) FROM tblQuiz WHERE [q7] = 'Yes'
SELECT COUNT(*) FROM tblQuiz WHERE [q8] <> '' AND [q8] IS NOT NULL
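One way to collapse these into a single recordset (built from the exact predicates above) is conditional aggregation with SUM(CASE ...), which reads tblQuiz once and returns one row with a column per stat:

SELECT
    SUM(CASE WHEN [q3] = '5 years +' OR [q3] = '2 - 4 years' THEN 1 ELSE 0 END) AS q3_count,
    SUM(CASE WHEN [q4] <> '' AND [q4] IS NOT NULL THEN 1 ELSE 0 END)            AS q4_count,
    SUM(CASE WHEN [q5] = 'Unhappy' THEN 1 ELSE 0 END)                           AS q5_count,
    SUM(CASE WHEN [q6] = 'Yes' THEN 1 ELSE 0 END)                               AS q6_count,
    SUM(CASE WHEN [q7] = 'Yes' THEN 1 ELSE 0 END)                               AS q7_count,
    SUM(CASE WHEN [q8] <> '' AND [q8] IS NOT NULL THEN 1 ELSE 0 END)            AS q8_count
FROM tblQuiz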
Hello all,

I have a database in SQL Server that should save data from a CRM-like application. The database consists of tables like products, services, customers, partners etc. The problem is that the users should be able to find these items by different properties, with or without substring matching (SQL: LIKE). Example: I want the users to be able to find a customer by providing a customerID, but also by providing a customer name, zipcode, or just a part of those strings.

This will result in a lot of queries. I bet there are some nice solutions to this, since I will not be the first with this situation. If anyone can help, please.

Thank you in advance.

Regards,
Freek Versteijn
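A common pattern for this kind of "search on whatever the user supplied" requirement is a single query (or stored procedure) in which every criterion is optional. The names below are invented for the sketch, and note that this catch-all shape can prevent efficient index use on large tables:

-- parameters the application passes; NULL means "not specified"
DECLARE @CustomerID int, @Name varchar(100), @ZipCode varchar(20)

SELECT c.*
FROM dbo.Customers AS c
WHERE (@CustomerID IS NULL OR c.CustomerID = @CustomerID)
  AND (@Name       IS NULL OR c.CustomerName LIKE '%' + @Name + '%')
  AND (@ZipCode    IS NULL OR c.ZipCode      LIKE @ZipCode + '%')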
Apparently, deleting 7,000,000 records from a table of about 20,000,000 is not advisable. We were able to take orders at 8:00AM, but not at 7:59.
So, what's the best way of going about deleting a large number of records? Pretty basic lookup table, no relationships with other tables, 12 or so address-type fields, 4 or 5 simple indexes. I can take it down for a weekend or night, if needed.
DTS the ones to keep to another table, drop the old and rename the new table? Bulk copy out, truncate and bring back in? DTS to text, truncate and import back? Other ways?
Never worked with such a large table and need a little experienced guidance.
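For the first option listed above (copy the rows to keep, drop the old table, rename), what I have in mind is roughly this, with made-up object names and a made-up "keep" condition; the indexes would be recreated on the new table afterwards:

-- copy the ~13 million rows to keep into a new table
SELECT *
INTO dbo.AddressLookup_new          -- hypothetical names
FROM dbo.AddressLookup
WHERE IsObsolete = 0                -- hypothetical "keep" condition

-- verify row counts, then swap the tables
DROP TABLE dbo.AddressLookup
EXEC sp_rename 'dbo.AddressLookup_new', 'AddressLookup'

-- recreate the 4 or 5 indexes on dbo.AddressLookup here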
I am planning an application where ~1000 companies will be accessing data. Should I use a key to identify the company and place all data in one table (i.e. WHERE company = 123), or should the application create company-specific tables? In other words, should I have 1000 small tables with 100 records in each, or one table with 100,000 records?
You know how there are lots of hosted applications out there, many of them provide you with your own database (not shared).
1. If a server has 1K databases on it, will this slow down the server just due to the # of databases? (each user has their own database, but they won't be accessing it that much really).
A separate database is usually required for security purposes.
I have a strange situation. Performance Monitor shows that the SQLServer:Transactions \ Transactions counter value is 125, but SQL Server Profiler does not show any activity.
I ran sp_who2 and I have a bunch of processes with SUSPENDED status. Would those be counted in Performance monitor?
I used MS SQL Server Management Studio Express to back up my SQL Server 2005 database called "PMDB" on a server in my office to a jump drive. Then, I removed the jump drive from the server and plugged it into my laptop. I then tried using MS SQL Server Management Studio Express to restore "PMDB" to my laptop SQL Server 2005 Express Edition (instance "Primavera") to a database called "pmdb$primavera". I've had several issues:
1. pmdb$primavera is now "locked up" "in the middle of a restore". I can't seem to unlock it even by rebooting my laptop.
2. When I execute the restore (I've used several command sequences...) I get a message that there are additional "families" restore is waiting for. I don't understand that.
3. I tried restoring to a completely new database (in the same instance...), but got the "family" problem. The query just hangs forever. When I cancel it, it hangs the database (see #1).
4. I now have two databases in the "Primavera" instance on my laptop in the "restoring" state. I can't get to either one of them.
I'm desperate. I have a customer presentation on Thursday where I must have the database working. Help!
Here's one of the queries I executed.... though I tweaked items here and there for 3 and 4 above.
restore database pmdb
from disk='f:databasebackupPMDB-BAK.BAK'
with recovery,
move 'pmdb_Dat' to 'C:Program FilesMSSQLPrimaveraMSSQL.1MSSQLDATApmdb$primavera_DAT.MDF',
move 'pmdb_log' to 'C:Program FilesMSSQLPrimaveraMSSQL.1MSSQLDATApmdb$primavera_LOG.LDF'
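Two commands that are sometimes useful in this state (the path and database name below are placeholders, not the real ones): RESTORE LABELONLY reports the media set and media family information for the backup device, which relates to the "additional families" message, and a bare RESTORE ... WITH RECOVERY can bring a database that is stuck in the "restoring" state online if no further restores are needed:

-- inspect the backup device's media set / media family information
RESTORE LABELONLY FROM DISK = 'F:\PMDB-BAK.BAK'    -- hypothetical path

-- try to bring a database that is stuck "restoring" online
RESTORE DATABASE [pmdb$primavera] WITH RECOVERY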
Every time a transaction log is dumped we see the following message in the log file:
BackupDiskFile::OpenMedia: Backup device '\s-sqlbkups-1g$myserverlogmy_databasemydatabase_backup_200711071430.trn' failed to open. Operating system error 2 (The system cannot find the file specified.).