Need Advice For Process Of Swapping DBs
Jul 23, 2005
I need to build a *.sql script that will remove a database (let's call
it "DB1") and replace it with a brand new empty database (let's call it
"DB2").
Caveat: I don't want to be left with database "DB1" having its files
confusingly named "DB2.mdf" and "DB2_log.ldf". These two files should
also be renamed to "DB1.mdf" and "DB1_log.ldf" so that outside
customers are not left confused. In addition, I need to be able to
restore the original DB1 if anything goes wrong during, or even after,
the entire process.
Let's assume every customer's *.mdf's and *.ldf's will always reside in
the C:\Program Files\Microsoft SQL Server\MSSQL\data folder.
I've researched sp_attach_db, but this looks more appropriate for
moving databases. This isn't what I want to do.
Thank you in advance.
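A minimal sketch of one way such a script could look, assuming the goal is a restorable backup of DB1 followed by a brand-new empty database whose physical files keep the DB1.* names; the backup path, logical file names, and data path here are illustrative, not the poster's actual values:

USE master;
GO
-- 1) Safety net: keep a full backup of the original DB1 so it can be restored later
BACKUP DATABASE DB1
TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\DB1_pre_swap.bak';
GO
-- 2) Remove the original database (dropping it also deletes its .mdf/.ldf files)
DROP DATABASE DB1;
GO
-- 3) Create the new, empty database, explicitly naming its files DB1.mdf / DB1_log.ldf
CREATE DATABASE DB2
ON PRIMARY ( NAME = DB2_data, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\data\DB1.mdf' )
LOG ON     ( NAME = DB2_log,  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\data\DB1_log.ldf' );
GO
-- 4) Rollback path if anything goes wrong during or after the swap:
-- RESTORE DATABASE DB1
-- FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\DB1_pre_swap.bak' WITH REPLACE;

If something goes wrong later, the backup can be restored under its original name (or under a temporary name while the new database still exists).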
View 1 Replies
May 23, 2006
I'm looking for best-practices advice. I have a flat file that gets produced by a mainframe system every day. I need to import this data into a relational database each night. I will use the data to do some numerical optimizations (generally it will be a knapsack algorithm) for scheduling of machine and personnel resources. The file is a "rolling" list of our back orders. By rolling, I mean that when a job is complete it drops off the list and when a new order comes in it appears on the list. I have all the data I need to do the optimizations contained in the flat file. As you can imagine, the data in the flat file is highly redundant and not in any normal form. I've successfully made a package that imports the flat file from the mainframe. Now, I'm ready to make a package that parses the flat file and stores the information in it in a normalized form while taking account of the "rolling" nature of the data.
I need to do several different things. If I get a new order, I need to add it to the order table. If I get an order that has changed (which I can tell by comparing a modified date in the flat file to a modified date in the order table), I need to modify the entries for that order. If an order shows up for a product that isn't already on file, I need to add the product to the product table. I also need to look at the orders that are already on file and see which ones are no longer in the nightly import so I can flag them as being closed. In any event, you get the drift.
My normal habit would be to write a program that does these things. I could certainly do that in this case and run it as an external process in my package. Given the tool that SSIS is, is a special-purpose program a reasonable approach to take within the context of SSIS? Or is there some better approach using some of the new tools that SSIS makes available? I'm certainly more comfortable writing a special-purpose app, but would prefer to take a best-practices approach to this kind of problem.
Thanks in advance for advice.
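One common SSIS pattern for this kind of "rolling" load is to land the flat file in a staging table with the data flow and then do the insert/update/close-out steps in set-based T-SQL (for example in Execute SQL Tasks after the data flow). A minimal sketch, where the staging table dbo.OrderStage, the targets dbo.[Order] and dbo.Product, and every column name are illustrative assumptions, not the poster's schema:

-- New products that are not on file yet
INSERT INTO dbo.Product (ProductCode)
SELECT DISTINCT s.ProductCode
FROM   dbo.OrderStage AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Product p WHERE p.ProductCode = s.ProductCode);

-- Changed orders (newer modified date in the nightly file)
UPDATE o
SET    o.ModifiedDate = s.ModifiedDate,
       o.Quantity     = s.Quantity
FROM   dbo.[Order] AS o
JOIN   dbo.OrderStage AS s ON s.OrderNo = o.OrderNo
WHERE  s.ModifiedDate > o.ModifiedDate;

-- Brand-new orders
INSERT INTO dbo.[Order] (OrderNo, ProductCode, Quantity, ModifiedDate)
SELECT s.OrderNo, s.ProductCode, s.Quantity, s.ModifiedDate
FROM   dbo.OrderStage AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.[Order] o WHERE o.OrderNo = s.OrderNo);

-- Orders that dropped off the rolling list get flagged as closed
UPDATE o
SET    o.IsClosed = 1
FROM   dbo.[Order] AS o
WHERE  NOT EXISTS (SELECT 1 FROM dbo.OrderStage s WHERE s.OrderNo = o.OrderNo)
  AND  o.IsClosed = 0;

Whether this logic lives in Execute SQL Tasks or in a stored procedure called from the package is largely a maintainability choice; either way it avoids writing a separate external program.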
View 3 Replies
View Related
Mar 19, 2008
I am converting a legacy ASP.NET 1.x site to an ASP.NET 2 one. In the former site, sets of data connections are stored in a web.config file, allowing swapping between testing and production servers based on a global variable value at the start of the app. As I now try to use SqlDataSource, I find the control has its connection string embedded in the HTML page and stored as a unique variable in the web.config. How can I dynamically swap the new SqlDataSource's ConnectionString the same way data connections are swapped in the former site? Please advise. Thanks.
View 6 Replies
View Related
Apr 5, 2001
I have a table that has a column for each month and I want to use a view to convert each row into 12 rows with a one month column. The month column will have 1 - 12 in it depending on the month. I need to convert it this way to push it into an Essbase cube. I know I can use the union operator to do this, but I would rather not read the table 12 times. Is there a way to do this just reading through the table once? My Hyperion Essbase book gives an Oracle example for swapping rows and columns by using a decode function. Is there a similar function in SQL Server?
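One single-pass option (a sketch with illustrative table and column names, playing the same role as the Oracle DECODE example) is to cross join the table against a 12-row month list and pick the right column with CASE, so the base table is read only once:

SELECT d.KeyCol,
       m.MonthNo,
       CASE m.MonthNo
            WHEN 1  THEN d.Jan  WHEN 2  THEN d.Feb  WHEN 3  THEN d.Mar
            WHEN 4  THEN d.Apr  WHEN 5  THEN d.May  WHEN 6  THEN d.Jun
            WHEN 7  THEN d.Jul  WHEN 8  THEN d.Aug  WHEN 9  THEN d.Sep
            WHEN 10 THEN d.Oct  WHEN 11 THEN d.Nov  WHEN 12 THEN d.Dec
       END AS MonthValue
FROM dbo.MonthlyData AS d
CROSS JOIN (SELECT 1 AS MonthNo UNION ALL SELECT 2  UNION ALL SELECT 3  UNION ALL SELECT 4
            UNION ALL SELECT 5  UNION ALL SELECT 6  UNION ALL SELECT 7  UNION ALL SELECT 8
            UNION ALL SELECT 9  UNION ALL SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12) AS m;

The 12-row derived table is tiny, so the cost is essentially one scan of the monthly table instead of twelve UNIONed scans.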
View 2 Replies
View Related
Dec 21, 2007
Hi,
Is it possible on a matrix report for the rows and columns to be swapped around after the report has been built? E.g., can the rows and columns be swapped around by the user on the preview page?
thanks,
Al
View 6 Replies
View Related
May 16, 2008
I've got a field that might have spurious values in it (say, an admin adds a new row but doesn't have an entry for this field).
I'm trying to swap in the string no_image_EN.jpg if the value in the db does NOT end in .jpg. That way, any value returned is either a valid filename or no_image_EN.jpg.
I'm having trouble with the CASE statement, particularly testing just the last few characters of the string:
select product_code,
CASE can_image_en
?? When (can_image_en LIKE '%.jpg') then can_image_en
Else 'no_image_EN.jpg'
End as can_image_en,
none of these do the trick either (some are bad syntax obviously):
? When (can_image_en LIKE '%.jpg') then can_image_en
? When LIKE '.jpg' then can_image_en
? When '%.jpg' then can_image_en
? When right(can_image_en,4) = '%.jpg' then can_image_en
This is the one that has correct syntax, though it seems to return false in ALL cases:
CASE can_image_en
When '%.jpg%' then can_image_en
Else 'no_image_EN.jpg'
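For reference, this needs the searched CASE form (CASE WHEN <condition> THEN ...): the simple form CASE can_image_en WHEN '%.jpg%' compares the column for equality with the literal string '%.jpg%', so it never matches. A sketch, where only the table name is assumed:

select product_code,
       CASE WHEN can_image_en LIKE '%.jpg' THEN can_image_en
            ELSE 'no_image_EN.jpg'
       END AS can_image_en
from   products   -- table name is an assumption

A NULL value also falls through to the ELSE branch, since NULL LIKE '%.jpg' does not evaluate to true.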
View 5 Replies
View Related
Jul 3, 2015
As part of a migration of data to a new SAN I have hit a bit of a snag in the migration. In summary what will happen is user database data files will be moved from one LUN (say drive F:) to a new LUN (say drive G:). Once all the data is migrated, plan is to remove dependency of that drive from SQL server and remove the drive and delete the LUN. So far, so good.
However one of the LUNs (drive D:) destined to be deleted also hosts the instance default directories, i.e. everything under MSSQL11.MSSQLSERVER (Data, Backups, FTData, JOBS, etc). BOL has articles on how to migrate system databases, including tempdb. But there is no guidance that I could find on how to relocate other folders. There are forums where users have listed registry changes, etc that can achieve this but these are steps I am unwilling to take on a production server.
So my plan is:
1) Add new drive to cluster (drive E:), sufficiently large enough to host instance default folders
2) Shutdown SQL server
3) Copy all default folders to new drive
4) Swap drive letters so that new drive is now D:
5) Start SQL server and if everything works, delete the original drive (which is now drive E:).
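As a sanity check before and after the drive-letter swap in step 4, something like the following (a sketch, not part of the poster's plan) confirms where every database file resolves; sys.master_files is available on SQL Server 2012 (the MSSQL11 instance mentioned above):

SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name
FROM   sys.master_files
ORDER BY database_name, file_id;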
View 3 Replies
View Related
Jul 20, 2005
All,
To make a long story short, we are swapping out the "knock-off" drives that the NA purchased on E-Bay in one of our production SQL Servers (SQL Server 2000 Enterprise) this weekend for brand new ones (Compaq 15K RPM 32GB drives). We are currently experiencing ASR almost on a daily basis and it is really causing a disruption in service. So, the Network Admin has made the decision to replace these drives in an attempt to solve this. These new drives will be imaged with ALL of the current data on the "knock-off" drives and will be plugged back into this database server and brought back up. This server also happens to currently be a subscriber in Merge Replication as well. Besides stopping replication to this subscriber, are there any other tasks that I need to do, or concerns that I need to be knowledgeable about or look for, when we bring this database server back online this weekend?
Thanks, Travis. :)
View 1 Replies
View Related
Feb 27, 2007
Hi,
Need some quick-fix help.
I have been trying to load data from AS400 to DB2 (Windows) using an ADO.NET connection in a DataReader Source and an OLE DB Destination (IBM OLE DB provider).
The files I'm trying to load have more than 15 million rows.
On execution of the package I get an Out of Memory error (see below).
My destination box has 4GB+ RAM and 4 CPUs.
There seems to be some buffer- and swapping-related issue which I'm not able to figure out. It says that the system is unable to allocate memory.
Please help me on the same.
Thanks in advance
Amit S
SSIS package "ABCDE
1.dtsx" starting.
Information: 0x4004300A at ABCDE
2003 to 2004, DTS.Pipeline: Validation phase is beginning.
Information: 0x4004300A at ABCDE
2003 to 2004, DTS.Pipeline: Validation phase is beginning.
Information: 0x40043006 at ABCDE
2003 to 2004, DTS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at ABCDE
2003 to 2004, DTS.Pipeline: Pre-Execute phase is beginning.
Information: 0x4004300C at ABCDE
2003 to 2004, DTS.Pipeline: Execute phase is beginning.
Error: 0xC0202009 at ABCDE
2003 to 2004, OLE DB Destination [12]: An OLE DB error has occurred. Error
code: 0x8007000E.
An OLE DB record is available.
Source: "Microsoft Cursor Engine" Hresult: 0x8007000E Description:
"Out of memory.".
Error: 0xC0047022
at ABCDE 2003 to 2004, DTS.Pipeline: The ProcessInput method on component
"OLE DB Destination" (12) failed with error code 0xC0202009. The
identified component returned an error from the ProcessInput method. The error
is specific to the component, but the error is fatal and will cause the Data
Flow task to stop running.
Error: 0xC0047021 at ABCDE
2003 to 2004, DTS.Pipeline: Thread "WorkThread0" has exited with
error code 0xC0202009.
Error: 0xC02090F5
at ABCDE 2003 to 2004, DataReader Source [61]: The component "DataReader
Source" (61) was unable to process the data.
Error: 0xC0047038 at ABCDE
2003 to 2004, DTS.Pipeline: The PrimeOutput method on component
"DataReader Source" (61) returned error code 0xC02090F5. The
component returned a failure code when the pipeline engine called
PrimeOutput(). The meaning of the failure code is defined by the component, but
the error is fatal and the pipeline stopped executing.
Error: 0xC0047021 at ABCDE
2003 to 2004, DTS.Pipeline: Thread "SourceThread0" has exited with
error code 0xC0047038.
Information: 0x40043008 at ABCDE
2003 to 2004, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x40043009 at ABCDE
2003 to 2004, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at ABCDE
2003 to 2004, DTS.Pipeline: "component "OLE DB Destination"
(12)" wrote 289188 rows.
Task failed: ABCDE 2003 to
2004
Warning: 0x80019002 at ABCDE
1: The Execution method succeeded, but the number of errors raised (6) reached
the maximum allowed (1); resulting in failure. This occurs when the number of
errors reaches the number specified in MaximumErrorCount. Change the
MaximumErrorCount or fix the errors.
Executing ExecutePackageTask:
C:Documents and SettingsAdministratorMy DocumentsVisual Studio
2005ProjectsIntegration Services Project1Integration Services Project1ABCDE
2.dtsx
Information: 0x4004300A at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Validation phase is beginning.
Information: 0x4004300A at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Validation phase is beginning.
Information: 0x4004300A at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Validation phase is beginning.
Information: 0x40043006 at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Pre-Execute phase is beginning.
Information: 0x4004300C at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Execute phase is beginning.
Information:
0x4004800D at ABCDE 2005_04 to 2005_11, DTS.Pipeline: The buffer manager failed
a memory allocation call for 10484320 bytes, but was unable to swap out any
buffers to relieve memory pressure. 3 buffers were considered and 3 were
locked. Either not enough memory is available to the pipeline because not
enough are installed, other processes were using it, or too many buffers are
locked.
Error: 0xC0047012
at ABCDE 2005_04 to 2005_11, DTS.Pipeline: A buffer failed while allocating
10484320 bytes.
Error: 0xC0047011
at ABCDE 2005_04 to 2005_11, DTS.Pipeline: The system reports 63 percent memory
load. There are 4294660096 bytes of physical memory with 1548783616 bytes free.
There are 2147352576 bytes of virtual memory with 227577856 bytes free. The
paging file has 6268805120 bytes with 3607072768 bytes free.
Error: 0xC02090F5 at ABCDE
2005_04 to 2005_11, DataReader Source [61]: The component "DataReader
Source" (61) was unable to process the data.
Error: 0xC0047038 at ABCDE
2005_04 to 2005_11, DTS.Pipeline: The PrimeOutput method on component
"DataReader Source" (61) returned error code 0xC02090F5. The
component returned a failure code when the pipeline engine called
PrimeOutput(). The meaning of the failure code is defined by the component, but
the error is fatal and the pipeline stopped executing.
Error: 0xC0047021 at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Thread "SourceThread0" has exited
with error code 0xC0047038.
Error: 0xC0047039 at ABCDE 2005_04
to 2005_11, DTS.Pipeline: Thread "WorkThread0" received a shutdown
signal and is terminating. The user requested a shutdown, or an error in
another thread is causing the pipeline to shutdown.
Error: 0xC0047021 at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Thread "WorkThread0" has exited
with error code 0xC0047039.
Information: 0x40043008 at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x40043009 at ABCDE
2005_04 to 2005_11, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at ABCDE
2005_04 to 2005_11, DTS.Pipeline: "component "OLE DB
Destination" (12)" wrote 0 rows.
Task failed: ABCDE 2005_04 to
2005_11
Warning: 0x80019002 at ABCDE:
The Execution method succeeded, but the number of errors raised (7) reached the
maximum allowed (1); resulting in failure. This occurs when the number of
errors reaches the number specified in MaximumErrorCount. Change the
MaximumErrorCount or fix the errors.
Executing ExecutePackageTask:
C:Documents and SettingsAdministratorMy DocumentsVisual Studio
2005ProjectsIntegration Services Project1Integration Services Project1ABCDE
3.dtsx
Information: 0x4004300A at ABCDE
2005_11 to 2006_04, DTS.Pipeline: Validation phase is beginning.
Information: 0x4004300A at ABCDE
2005_11 to 2006_04, DTS.Pipeline: Validation phase is beginning.
Information: 0x4004300A at ABCDE
2005_11 to 2006_04, DTS.Pipeline: Validation phase is beginning.
Information: 0x40043006 at ABCDE
2005_11 to 2006_04, DTS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at ABCDE
2005_11 to 2006_04, DTS.Pipeline: Pre-Execute phase is beginning.
€¦€¦.
€¦€¦€¦€¦
View 11 Replies
View Related
Apr 11, 2007
Hello - I have a SQL 2000 server which has a D: drive that contains all of my databases (system and user). I am running out of space on this volume and need to migrate its contents to a larger one. My initial plan was to introduce a new volume to the server (say a K: drive), back up all databases (of course), and then stop all SQL services. Copy all data from D: to K:. Once the data is copied, swap the drive letters (D: to I: and then K: to D:). Then restart the SQL services. SQL should not know any better, since everything was on the D: drive when it went down and everything is still on the D: drive when it comes back up, correct?
The other option mentioned is to detach the databases, copy the data and then reattach them in their new locations. I understand this method, but it seems more involved (and riskier) than just renaming the drives. Does anyone have an opinion regarding these two migration methods? Thanks for your help.
Chris
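For comparison, the detach/copy/re-attach alternative looks roughly like this per user database on SQL 2000 (a sketch; the database name and paths are illustrative):

-- release the files
EXEC sp_detach_db 'MyDb';
-- copy MyDb.mdf / MyDb_log.ldf from D: to K: at the OS level, then re-attach:
EXEC sp_attach_db @dbname    = 'MyDb',
                  @filename1 = 'K:\Data\MyDb.mdf',
                  @filename2 = 'K:\Data\MyDb_log.ldf';

The drive-letter swap avoids this per-database work, and the system databases (master, msdb, tempdb) cannot simply be detached anyway, which is a point in favour of the rename approach.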
View 3 Replies
View Related
May 28, 2015
Is there a way to permanently change the order of the columns in Job Activity Monitor?
I'd like to move Duration to the right of Step Name, but this only lasts so long as I have JAM open. Once I close it and re-open, JAM goes back to its default column order. Google gives me nothing but the temporary "drag and drop" method that I already know about.
View 2 Replies
View Related
Nov 14, 2007
Hi,
I was trying to extract data from the source server using OLEDB Source and SQL Server Destination when i encountered this error:
"Transaction (Process ID 135) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.".
What must be done so that even if the table being queried is locked, i wouldn't experience any deadlock?
cherriesh
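One commonly used workaround (a sketch, with the usual dirty-read caveat) is to have the OLE DB Source read without taking shared locks, so the extract neither blocks nor gets chosen as a deadlock victim over shared locks; the table and column names are illustrative:

-- per-query hint in the source's SQL command:
SELECT col1, col2
FROM   dbo.SourceTable WITH (NOLOCK);

-- or per-session, before the extract query:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;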
View 4 Replies
View Related
Dec 3, 2007
Hello all,
I am running into an interesting scenario on my desktop. I'm running developer edition on Windows XP Professional (9.00.3042.00 SP2 Developer Edition). OS is autopatched via corporate policy and I saw some patches go in last week. This machine is also a hand-me-down so I don't have a clean install of the databases on the machine but I am local admin.
So, starting last week after a forced remote reboot (also a policy) I noticed a few of the databases didn't start back up. I chalked it up to the hard shutdown and went along my merry way. Friday however I know I shut my machine down nicely and this morning when I booted up, I was in the same state I was last Wenesday. 7 of the 18 databases on my machine came up with
FCB::Open: Operating system error 32(The process cannot access the file because it is being used by another process.) occurred while creating or opening file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf'. Diagnose and correct the operating system error, and retry the operation.
and it also logs
FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf for file number 1. OS error: 32(The process cannot access the file because it is being used by another process.).
I've caught references to the auto close feature being a possible culprit, no dice as the databases in question are set to False. Recovery mode varies on the databases from Simple to Full. If I cycle the SQL Server service, whatever transient issue it was having with those files is gone.
As much as I'd love to disable the virus scanner, network security would not be amused. The data and log files appear to have the same permissions as unaffected database files. Nothing's set to read only or archive as I've caught on other forums as possible gremlins. I have sufficient disk space and the databases are set for unrestricted growth.
Any thoughts on what I could look at? If everything came up in RECOVERY_PENDING it would make more sense to me than the hit-or-miss type of thing I'm experiencing now.
View 13 Replies
View Related
Mar 11, 2008
Dear list
I'm designing a package that uses Microsoft's preplog.exe to prepare web log files to be imported into SQL Server.
What I'm trying to do is convert this cmd that works into an Execute Process Task:
D:\SSIS Process\Prepweblog\ProcessLoad>preplog ex.log > out.log
the above dos cmd works 100%
However when I use the Execute Process Task I get this error
[Execute Process Task] Error: In Executing "D:\SSIS Process\Prepweblog\ProcessLoad\preplog.exe" "" at "D:\SSIS Process\Prepweblog\ProcessLoad", The process exit code was "-1" while the expected was "0".
There are two package varaibles
User::gsPreplogInput = ex.log
User::gsPreplogOutput = out.log
Here are the task properties
RequireFullFileName = True
Executable = D:\SSIS Process\Prepweblog\ProcessLoad\preplog.exe
Arguments =
WorkingDirectory = D:\SSIS Process\Prepweblog\ProcessLoad
StandardInputVariable = User::gsPreplogInput
StandardOutputVariable = User::gsPreplogOutput
StandardErrorVariable =
FailTaskIfReturnCodeIsNotSuccessValue = True
SuccessValue = 0
TimeOut = 0
thanks in advance
Dave
View 1 Replies
View Related
Jan 30, 2007
How do I use the execute process task? I am trying to unzip the file using the freeware PZUnzip.exe and I tried to place the entire command in a batch file and specified the working directory as the location of the batch file, but the task fails with the error:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC0029151 at Unzip download file, Execute Process Task: In Executing "C:\ETL\POSData\IngramWeekly\Unzip.bat" "" at "C:\ETL\POSData\IngramWeekly", The process exit code was "1" while the expected was "0".
Task failed: Unzip download file
SSIS package "IngramWeeklyPOS.dtsx" finished: Success.
Then I tried to specify the exe directly in the Executable property and the arguments as the location of the zip file and the directory to unzip the files in, but this time it fails with the following message:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC002F304 at Unzip download file, Execute Process Task: An error occurred with the following error message: "%1 is not a valid Win32 application".
Task failed: Unzip download file
SSIS package "IngramWeeklyPOS.dtsx" finished: Success.
The command in the batch file when run from the command line works perfectly and unzips the file, so there is absolutely no problem with the command, I believe it is just the set up of the variables on the execute process task editor under Process. Any input on resolving this will be much appreciated.
Thanks,
Monisha
View 1 Replies
View Related
Mar 20, 2008
I am designing a utility which will keep two similar databases in sync. In other words, copying the new data from db1 to db2 and updating the old data from db1 to db2.
For this I am making use of the 'Tablediff' utility which when provided with server name, database, table info will generate .sql file which can be used to keep the target table in sync with the source table.
I am using the Execute Process Task and the process parameters I am providing are:
WorkingDirectory : C:\Program Files (x86)\Microsoft SQL Server\90\COM
Executable : C:\SQL_bat_Files\SQL5\TC_CTI\customer.bat
The customer.bat file will have the following code:
tablediff -sourceserver "LV-SQL5" -sourcedatabase "TC_CTI" -sourcetable "CUSTOMER_1" -destinationserver "LV-SQL2" -destinationdatabase "TC_CTI" -destinationtable "CUSTOMER" -f "c:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1"
The .sql file will be generated at: C:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1.
The Problem:
The Execute Process Task is working fine, i.e., the tables are being compared correctly and the .SQL file is being generated as desired. But the task as such is reporting failure with the following error:
[Execute Process Task] Error: In Executing "C:\SQL_bat_Files\SQL5\TC_CTI\package_occurrence.bat" "" at "C:\Program Files (x86)\Microsoft SQL Server\90\COM", The process exit code was "2" while the expected was "0".
Some of you may suggest just setting ForceExecutionResult = Success (in fact this is what I am doing now just to get the program working), but this is not what I desire.
Can anyone help ?
View 9 Replies
View Related
Aug 20, 2014
I'm pulling data from an Oracle db and loading it into MS-SQL 2008. For my data type checks during the data load process, what are the options to ensure that the data being processed won't fail? I.e., I want to verify the data against the target data types first and, if the format is valid, load it into the destination table, else mark it with an error flag and push it into an errors table... all this at the row level. One way I can think of is to load into a staging table, then get the source and destination table column data types, compare them and proceed.
Or should I just try loading the data directly and, if it fails, try troubleshooting (which could be a difficult task, as I wouldn't know what caused the error...)?
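A sketch of the staging-table variant on SQL Server 2008 (TRY_CONVERT only arrives in 2012, so the row-level tests here use ISDATE/ISNUMERIC): rows that pass the conversion checks go to the destination, the rest are flagged into an errors table. Every object and column name below is illustrative:

-- rows that convert cleanly
INSERT INTO dbo.Target (OrderDate, Amount)
SELECT CONVERT(datetime, s.OrderDate),
       CONVERT(decimal(18, 2), s.Amount)
FROM   dbo.Staging AS s
WHERE  ISDATE(s.OrderDate) = 1
  AND  ISNUMERIC(s.Amount) = 1;

-- rows that fail the row-level checks, flagged for review
INSERT INTO dbo.LoadErrors (OrderDate, Amount, ErrorFlag)
SELECT s.OrderDate, s.Amount, 'type conversion failed'
FROM   dbo.Staging AS s
WHERE  ISDATE(s.OrderDate) = 0
   OR  ISNUMERIC(s.Amount) = 0;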
View 3 Replies
View Related
Feb 14, 2007
Hi Folks,
I am having this table locking issue that I need to start paying attention to, as it's getting more frequent.
The problem is that the data in the tables is live finance data that needs to be changed and viewed almost in real time, so what I have picked up so far is that using table hints may not be a good idea.
I have a guy at work telling me that introducing a data access layer is the only way to solve this. I am not convinced, but I haven't enough knowledge to back my own feeling up (ASP system, not .NET).
Thanks in advance
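One option often suggested instead of table hints or a new data access layer, if the instance is SQL Server 2005 or later, is row versioning so readers don't block writers. A one-line sketch (the database name is illustrative, and the switch needs no other active connections in the database while it runs):

ALTER DATABASE MyFinanceDb SET READ_COMMITTED_SNAPSHOT ON;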
View 1 Replies
View Related
Jan 6, 2012
We are facing deadlock issue in our web application. The below message is coming:
> Session ID: pwdagc55bdps0q45q0j4ux55
> Location: xxx.xxx.xxx.xxx
> Error in: http://xxx.xxx.xxx.xxx:xxxx/Manhatta...Bar=&Mode=Edit
> Notes:
> Parameters:
> __EVENTTARGET:
> __EVENTARGUMENT:
[code].....
View 2 Replies
View Related
Feb 17, 2007
Hi,
I'm trying to upload the ASPNETDB.MDF file to a hosting server via FTP, and every time, when it is uploaded halfway (40% or 50%),
I get an error message saying:
"550 ASPNETDB.MDF: The process cannot access the file because it is being used by another process"
and then the upload fails.
I'm using SQL Express. Does anybody know what's the cause?
Thanks a lot
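The upload fails because the local SQL Express instance still has the .mdf open. A sketch of one way to release the file before the FTP transfer, assuming the database is attached to the local Express instance under the name ASPNETDB (the re-attach path is a placeholder):

-- run against the local SQL Express instance
EXEC sp_detach_db 'ASPNETDB';
-- upload ASPNETDB.MDF, then re-attach locally if still needed:
-- CREATE DATABASE ASPNETDB
-- ON (FILENAME = 'C:\path\to\App_Data\ASPNETDB.MDF') FOR ATTACH;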
View 1 Replies
View Related
Nov 15, 2007
Hi. When I try to start a package manually clicking the Start Debugging button I get this after a little while:
Cannot process request because the process (3880) has exited. (Microsoft.DataTransformationServices.VsIntegration)
How can I prevent this from happening? This happens every time I want to start the package and
every time the process id is different. Here it is 3880.
Darek
View 13 Replies
View Related
Dec 20, 2006
select * from sysprocesses
How can I determine whether a process is a system process or a user process?
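A common rule of thumb (sketched below): SPIDs 1-50 are reserved for internal/system tasks, and user connections get spid values above 50; sysprocesses also exposes the login, host and program for each spid:

SELECT spid, loginame, hostname, program_name,
       CASE WHEN spid <= 50 THEN 'system' ELSE 'user' END AS process_type
FROM   master..sysprocesses;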
View 3 Replies
View Related
Oct 11, 2007
Hello,
I have had a full lock on my SQL Server and I have a few logs to find the origin of the lock.
I know the process at the head of the lock is the 55 process.
Here are the information I have on this process:
Spid 55 55
ecid 5 5
Ecid 0 0
ObjId 0 1784601646
IndId 0 0
Type DB PAG
Resource 1:1976242
Mode S IS
Status TransID GRANT GRANT
TransID 0 16980
TransUOW 00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000
lastwaittype PAGEIOLATCH_SH
CMD AWAITING COMMAND
Physical IO 1059
Login time 2007-07-05 04:29:53.873
nat address DFF06EBF974D
Wait type 0x0046
HostName .
BlkBy .
DBName grpprddb
CPUTime 54331
DiskIO 1059
ProgramName
Would someone know a way to identify the origin of the process 55?
I have already tried to execute the following query:
select * from SYSOBJECTS
where id=1784601646
But it returned nothing.
Regards,
Renaud
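Two things worth trying here (a sketch using the values from the output above): ask spid 55 what it last executed, and resolve the object id inside the database the lock belongs to, since sysobjects is per-database and querying it from the wrong database returns nothing:

-- last statement sent by the blocking spid (only useful while it is still connected)
DBCC INPUTBUFFER (55);

-- resolve the object id in the database named in the lock output
USE grpprddb;
SELECT name, type
FROM   sysobjects
WHERE  id = 1784601646;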
View 3 Replies
View Related
Aug 24, 2006
I have a File System Task Copy file operation to copy a file in an SSIS package. The package when scheduled as a job fails with the following error:
The process cannot access the file 'C:\ETL\Consignment\Apple\AppleRawFile.txt' because it is being used by another process.".
However when I right click on the package and execute it manually from the Integration Services it runs successfully without any problem. I am not certain on how to resolve this issue any inputs will be much appreciated.
Thanks,
Monisha
View 19 Replies
View Related
Feb 6, 2008
When running two File System Tasks after each other, with the same file, the file is still locked when running the second task. Resulting in an error: 0xC002F304 at Rename file 1, File System Task: An error occurred with the following error message: "The process cannot access the file because it is being used by another process.".
I found a workaround by adding an Execute Process Task before the second File System Task that pings localhost. This results in a 5 second delay, but there must be a better solution. Anyone?
View 9 Replies
View Related
Nov 23, 2006
While configuring log shipping using SQL 2000 Ent, the copy process failed with this message :
The process cannot access the file because it is being used by another process.
does anyone know what causes this?
SAP R/3 is configured with the system, could this be the problem ?
View 3 Replies
View Related
Apr 7, 2008
I'm going nuts with the SQL Server notification thing. I have gone through this article, which tells how to set up the user: http://www.codeproject.com/KB/database/SqlDependencyPermissions.aspx. The article shows how to create a new user and set it up for SQL Server notification. But in my case the user already exists in the database, which is a very common scenario. So I did the following (check the SQL script below), but then I get this error:
"A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
This is my SQL script:
use [master]
GO
-- Ensuring that Service Broker is enabled
ALTER DATABASE [DatabaseName] SET ENABLE_BROKER
GO
-- Switching to our database
use [DatabaseName]
GO
CREATE SCHEMA schemaname AUTHORIZATION username
GO
ALTER USER username WITH DEFAULT_SCHEMA = schemaname
GO
/* Creating two new roles. We're not going to set the necessary permissions
   on the user-accounts, but we're going to set them on these two new roles.
   At the end of this script, we're simply going to make our two users
   members of these roles. */
EXEC sp_addrole 'sql_dependency_subscriber'
EXEC sp_addrole 'sql_dependency_starter'
-- Permissions needed for [sql_dependency_starter]
GRANT CREATE PROCEDURE to [sql_dependency_starter]
GRANT CREATE QUEUE to [sql_dependency_starter]
GRANT CREATE SERVICE to [sql_dependency_starter]
GRANT REFERENCES on CONTRACT::[http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification] to [sql_dependency_starter]
GRANT VIEW DEFINITION TO [sql_dependency_starter]
-- Permissions needed for [sql_dependency_subscriber]
GRANT SELECT to [sql_dependency_subscriber]
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO [sql_dependency_subscriber]
GRANT RECEIVE ON QueryNotificationErrorsQueue TO [sql_dependency_subscriber]
GRANT REFERENCES on CONTRACT::[http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification] to [sql_dependency_subscriber]
-- Making sure that my users are members of the correct roles.
EXEC sp_addrolemember 'sql_dependency_starter', 'username'
EXEC sp_addrolemember 'sql_dependency_subscriber', 'username'
View 10 Replies
View Related
Mar 19, 2008
Hi, I am using VWD 2008, SQL Express 2005, Reporting Services, Win-XP, IIS5.
Basically, let's say I have 2 pages:
Page1: has a SqlDataSource control that populates a GridView from a table in a database file myDB.mdf (no code behind).
Page2: has a ReportViewer control that shows a report with data from the same table in myDB.mdf from the report server (no code behind).
I have attached myDB.mdf to SQL Server Express using SQL Server Management Studio Express.
If I first open Page2 to display the ReportViewer it works OK, also when using the Report Manager.
Now this is the problem: if after that I try to open Page1, I get an error message:
"Cannot open user default database. Login failed. Login failed for user 'myServer\ASPNET'. Exception Details: System.Data.SqlClient.SqlException: Cannot open user default database. Login failed. Login failed for user 'myServer\ASPNET'."
Then I have to restart SQL Server to fix it. Now I can open Page1 OK, but if after this I try to open Page2 (ReportViewer) again, I get this error:
"An error has occurred during report processing. Cannot create a connection to data source 'my_Datasource'."
And this error if I open the report using the Report Manager:
"An error has occurred during report processing. Cannot create a connection to data source 'my_Datasource'. Unable to open the physical file "C:\Inetpub\wwwroot\Website\App_Data\myDB.mdf". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)"."
Now if I check Management Studio Express again, you can see that myDB.mdf was detached. It seems to be there but it has no tables or definitions, so I have to attach it again.
Do you know how to fix this?
Thanks in advance,
Ed
View 5 Replies
View Related
Sep 21, 2007
Hello
Which is better and faster, and why: writing a SELECT statement with joins in a stored procedure, or creating a view and calling it from the stored procedure (select * from view)?
View 4 Replies
View Related
Jan 13, 2004
Hi
Dont laugh but...
I am not entirely ignorant of web technologies and best practices, but I am having a bit of a planning dilemma.
My company has a well-established SQL 2000 database with a Windows application which I created myself. What I am planning on doing is creating a web site using ASP.NET and publishing some of the information, so that our clients may use it and stop pestering us on the phone. What I would like to know is what would be the best way forward: obviously I don't want to show them all our information, and I don't want to put 5GB worth of data onto an ISP website. What would you suggest I do?
Thanks in advance
Brad
View 1 Replies
View Related
Jun 8, 2004
DECLARE @returnDay int
SELECT @returnDay = DatePart(day,GetDate())
If @returnDay = 8
BEGIN
select * from Hospitals left join Units ON Units.HospitalID = Hospitals.HospitalID where Units.HospitalID is null
RETURN
END
This is just a part of the procedure I am trying to create. I am getting the hospitals that haven't submitted any data and wish to send them an email.
On the other hand, I have two tables that hold all the data for emailing hospitals, but they are not linked to the tables giving the list of hospitals.
I have been advised to create a cursor (easier said than done) that will go through my list of records that need to receive an email;
nothing is going very well with that at the moment,
so I was hoping to see if somebody has any other suggestions for me.
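For what it's worth, a sketch of that cursor on SQL 2000 using xp_sendmail, which assumes SQL Mail is configured and that Hospitals has an email column; the EmailAddress column name and the message text are assumptions, not from the poster's schema:

DECLARE @email varchar(255)

DECLARE hosp_cursor CURSOR FOR
    SELECT h.EmailAddress
    FROM   Hospitals h
    LEFT JOIN Units u ON u.HospitalID = h.HospitalID
    WHERE  u.HospitalID IS NULL

OPEN hosp_cursor
FETCH NEXT FROM hosp_cursor INTO @email
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC master.dbo.xp_sendmail
         @recipients = @email,
         @subject    = 'Data submission reminder',
         @message    = 'We have not received any unit data from your hospital.'
    FETCH NEXT FROM hosp_cursor INTO @email
END
CLOSE hosp_cursor
DEALLOCATE hosp_cursor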
View 1 Replies
View Related
Apr 19, 2004
Hi,
I am trying to solve this procedure.
Let me try to explain it... I am getting a DEGREEID from one of the SELECT queries. I want the procedure to OUTPUT (i.e., COUNT) the number of departments with the DEGREEID obtained from that query.
With the procedure below, since an employee can have multiple DEGREEIDs, the cursor is giving the OUTPUT (i.e., COUNT) only for the LAST DEGREEID, even when the previous DEGREEIDs don't have any DEPARTMENT!
How can I solve this? Can I solve it with a CURSOR, or do I have to use some other way? Please advise!
DATA
----
DEGREE_EARNED
EMPID    DEGREEID
------   ---------
201      12
201      3
201      250
202      3
202      10
203      17

DEPARTMENT
DEPID    DEGREEID
------   ---------
101      12
111      250
111      12
121      3
121      12
121      250
------------------------------------
--------------------------------------------------------------------
-- @empid and @outresult are parameters of the full procedure (not shown here)
DECLARE @vchid int
DECLARE testcursor CURSOR FOR
    SELECT degree_id
    FROM degree_earned WHERE emp_id = @empid
OPEN testcursor
FETCH NEXT FROM testcursor INTO @vchid
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    SELECT @outresult = COUNT(*)
    FROM department
    WHERE degree_id = @vchid
    FETCH NEXT FROM testcursor INTO @vchid
END
CLOSE testcursor
DEALLOCATE testcursor
--------------------------------------------------------------------
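If the aim is simply the total number of DEPARTMENT rows matching any of the employee's degrees, the cursor can be replaced by one set-based query (a sketch using the column names from the cursor version; use COUNT(DISTINCT d.dep_id) instead if each department should only be counted once, where dep_id is the assumed key column):

SELECT @outresult = COUNT(*)
FROM   department    AS d
JOIN   degree_earned AS e ON e.degree_id = d.degree_id
WHERE  e.emp_id = @empid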
View 4 Replies
View Related
Mar 15, 2006
I tried to run a SQL script and I get the following message:
quote:Error Message = Native SQL Error Code
[Microsoft][ODBC SQL Server Driver][SQL Server]Line 1: Incorrect syntax near ''.
SELECT name, id, description FROM products WHERE id=4INSERT INTO ''admin_login'' (''login_id'', ''login_name'', ''password'', ''details'') VALUES (252,''neo2'',''newpas5'',''NA'')--
Error Code : 350
F:\Inetpub\Parexel\default.ihtml
iSQL dbname="Parexel" ALIAS="prod" SQL="SELECT name, id, description FROM products WHERE id=4INSERT INTO 'admin_login' ('login_id', 'login_name', 'password', 'details') VALUES (252,'neo2','newpas5','NA')--"
[Microsoft][ODBC SQL Server Driver][SQL Server]Line 1: Incorrect syntax near ''.SELECT name, id, description FROM products WHERE id=4INSERT INTO ''admin_login'' (''login_id'', ''login_name'', ''password'', ''details'') VALUES (252,''neo2'',''newpas5'',''NA'')--
I'm new to SQL and I don't know what I'm doing.
Also, what does this mean:
quote:iSQL dbname="admin" ALIAS="prod" SQL="SELECT name, id, description
What do SQL databases do, and what would I need to do to enter data into SQL, such as an email address, or can't that be done?
I think dbname is the database name; is ALIAS the table name?
I'm just trying to enter data into my database and I have no clue what I'm doing.
I've read guides online but I get the above error 350.
D
View 6 Replies
View Related