I am having no end of trouble with transactions in the package I am building. I just want to go back to basics and see if someone can tell me where I should set specific transaction options.
Firstly, my package runs a Foreach Loop which iterates through a directory of directories. Each of the subdirectories contains 2 files. The first steps in the loop check whether a folder has been processed previously, and if so move it to a specified directory. This is done first because I cannot move a directory while it is being read in the Foreach Loop, so I pass the path on to the next iteration of the loop. There is another File System (move directory) task outside the Foreach Loop to deal with the last directory.
Once this has been done, I parse the file name of the .xls file within the directory to get a serial number, which is assigned to a variable.
The next step is where I envisage that the transactions should be implemented:
I have a Sequence Container which holds 2 Data Flow tasks that run in parallel. Each of these reads data from a separate worksheet in the .xls file and writes it to a database table. Each Data Flow task consists of an Excel source, a Derived Column transformation, a Lookup (used to derive an ID from the serial number stored in the variable), and an OLE DB destination.
Upon completion, if the Sequence Container fails I want to set the destination folder path to the quarantine location. If it succeeds I want to copy the .csv file contained in the same directory to a separate location and then set the output folder to the archive location.
What I need to know is: where do I set the transaction option if I want to roll back the data that has been inserted into the database when either Data Flow task fails?
Please somebody help, as this is not working at all.
Sometimes when I do "alter database ABCD set partner failover" I get the following message: Nonqualified transactions are being rolled back. Estimated rollback completion: 100%.
In 99 percent of cases, after such a message, the first attempt to use an open connection also raises an error such as "Exception: A transport-level error has occurred when sending the request to the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
After the first error, all subsequent queries run perfectly.
I goofed up and ran a bad update query, which messed up all the data in a single table. I'm trying to avoid restoring the table from a previous backup, since the backup is more than 20 GB and would take forever to restore. Any advice would be much appreciated!
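If it turns out I can't avoid it, I assume the fallback is to restore a copy of the database under a different name to a point just before the bad update, then pull the table back from the copy. Something like this is what I have in mind (names, paths and the time are all made up, and I'm assuming the full recovery model with log backups):
<code>
-- Restore the full backup under a new name, leaving it ready for log restores
RESTORE DATABASE MyDb_Copy
FROM DISK = N'D:\Backups\MyDb_Full.bak'
WITH MOVE N'MyDb_Data' TO N'D:\Data\MyDb_Copy.mdf',
     MOVE N'MyDb_Log'  TO N'D:\Data\MyDb_Copy_log.ldf',
     NORECOVERY;

-- Roll the log forward to just before the bad update, then recover
RESTORE LOG MyDb_Copy
FROM DISK = N'D:\Backups\MyDb_Log.trn'
WITH STOPAT = '2008-06-01 09:55:00',
     RECOVERY;

-- then copy the damaged table's rows back from MyDb_Copy
</code>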
Hi, I am used to using Access, but I need to learn SQL for larger web sites. I normally build the database in Access visually; how do you do this with SQL? I just installed SQL Developer and I can't find the area that builds the database.
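For example, is building the database in SQL just a matter of writing statements like this instead of doing it visually (a made-up example), and if so, where do I type them in?
<code>
CREATE TABLE Customers (
    CustomerID int         NOT NULL PRIMARY KEY,
    FirstName  varchar(50) NOT NULL,
    LastName   varchar(50) NOT NULL,
    SignupDate datetime    NULL
);
</code>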
Hi, I'm new to SQL Server... I want to know the difference between the Unicode and character data types, i.e. the difference between varchar and nvarchar. Please help.
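This little test is the sort of thing I've been playing with to try to understand it, in case it helps show what I'm asking (values are arbitrary):
<code>
DECLARE @a varchar(20), @b nvarchar(20);
SET @a = 'hello';      -- character data, 1 byte per character
SET @b = N'Ωmega';     -- Unicode data needs nvarchar and the N prefix
SELECT DATALENGTH(@a) AS varchar_bytes, DATALENGTH(@b) AS nvarchar_bytes;
</code>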
I've a number of SQL DBs that are managed by a php browser front end. These were developed in php/html against a SQL 2000 DB.
The DB and tables have been recreated in SQL 2005, permissions are those of the owner (sa), and the php code has been adjusted accordingly. Although reading the data is fine, I cannot update or insert data from the php script into the SQL 2005 DB, whereas it works perfectly through the same php script against SQL 2000.
Obviously, there's something I'm not seeing. There are no noticeable messages, and the last transaction does show up in the php display... but after closing the portal and reopening it, the record is "gone" and the SQL 2005 DB remains untouched.
Hi, I am a DB2 DBA and plan to learn SQL Server for professional reasons, but I am not able to find material to begin with. I would be grateful if you guys could send me links to start with the architecture and other basics of SQL Server. I just want to acquire additional skills that can help me in the future.
I am not sure if this is the right forum to get a solution to my question.
Where can I get Sybase basics? I am posting this query in this forum because this forum and the gurus in it have helped me in a lot of critical situations.
I've recently gotten into the administration side of SQL and am muddling my way through it as best I can. We have a DB server with three or four databases on it. Maintenance routines do a full backup of each one nightly and weekly and put all sets into one backup (.bak) file.
I've never had to restore them, but I thought I'd put myself through a test run since this is unexplored territory for me and I would prefer not to have to learn in an emergency.
A scenario I can foresee is one where I'd have to restore just one table, perhaps to a new location (i.e. not overwrite the live db, just bring up a copy of part of it elsewhere so I can do a data dump from some old data). But when I go to 'restore database' or 'restore file/filegroups' and select the file, it really wants to restore the full backup directly over top of the existing database. If I try to restore it to a new, non-existent database, I get an error along the lines of 'logical file master is not part of database new-backup', and if I try to restore it to a new db that I've created, I get an error about how the databases don't match.
It seems to me that my problem is a philosophical one - perhaps I don't understand the nature of backups, but I feel like what I'm trying to do is pretty simple. I want to take the full backup of database A and restore it, in part (or in full if I have to) to new database B. What do I need to do to accomplish this?
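Based on my (possibly wrong) reading, I think what I'm after is a plain T-SQL restore under a new database name, something like this? The file paths and logical names below are guesses; I gather RESTORE FILELISTONLY tells you the real logical names inside the backup.
<code>
-- See what logical file names are inside the backup file
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\Nightly.bak';

-- Restore the full backup of database A as a brand-new database B
RESTORE DATABASE DatabaseB
FROM DISK = N'D:\Backups\Nightly.bak'
WITH FILE = 1,                                        -- which backup set in the file
     MOVE N'DatabaseA_Data' TO N'D:\Data\DatabaseB.mdf',
     MOVE N'DatabaseA_Log'  TO N'D:\Data\DatabaseB_log.ldf',
     RECOVERY;
</code>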
Thanks for indulging me - this must be brain-dead stuff for most of you!
Hi there, I have decided to move all my transaction handling from ASP.NET to stored procedures in a SQL Server 2000 database. I know the database is capable of rolling back transactions just like myTransaction.Rollback() in ASP.NET. But what about exceptions? In ASP.NET, I am used to doing the following:
<code>
Try
    ' execute commands
    myTransaction.Commit()
Catch ex As Exception
    Response.Write(ex.Message)
    myTransaction.Rollback()
End Try
</code>
Will the database inform me of any exceptions (and their messages)? Do I need to put anything explicit in my stored procedure other than ROLLBACK TRANSACTION? Any help is greatly appreciated.
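My current guess at the stored procedure pattern is something like this, checking @@ERROR after each statement since this is SQL 2000 (the table and messages are made up) - is that all that's needed, or is there more to it?
<code>
CREATE PROCEDURE usp_TransferFunds
    @FromID int, @ToID int, @Amount money
AS
BEGIN TRANSACTION

UPDATE Accounts SET Balance = Balance - @Amount WHERE AccountID = @FromID
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RAISERROR('Debit step failed', 16, 1)
    RETURN
END

UPDATE Accounts SET Balance = Balance + @Amount WHERE AccountID = @ToID
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RAISERROR('Credit step failed', 16, 1)
    RETURN
END

COMMIT TRANSACTION
GO
</code>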
Hey everyone, sorry for the lame questions, but: 1) I have transactional replication set up. However, whenever we select a new table or column within a table to replicate to the subscribers, the changes never seem to get picked up unless we make an actual change to some of the other data. Any idea why this is? Even after I've reinitialized snapshots, the new changes don't want to propagate to the subscribers until data is actually changed. 2) Where do I need to go to manually stop and restart a distribution agent to force these changes to be propagated without having to change any data within the tables? Thanks
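For 2), the only thing I've thought of so far is to kick the agent jobs manually from SQL Agent with something like this (the job name here is made up), but I don't know whether that's the right approach:
<code>
-- Start the distribution (or snapshot) agent job manually; the job name is just a placeholder
EXEC msdb.dbo.sp_start_job @job_name = N'MyServer-MyDb-MyPublication-Distribution';
-- and presumably sp_stop_job with the same name to stop it
</code>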
I've set up my stored procedure, but now what? I want to display the information (it will return only one record) on the page, NOT using a DataGrid or list, just something simple... I guess using an asp:Label? Could someone tell me what I need to do!
Also, are there any decent resources out there that just show you, right out of the box, how to insert / update / delete with stored procedures, and how to show / view the results, with proper simple source code?
THANKS!
my code:

// Creating the database connection
SqlConnection objConn = new SqlConnection(System.Configuration.ConfigurationSettings.AppSettings["MM_CONNECTION_STRING_conn_main"]);

// Creating the command for the webpage stored procedure
// Declare that the T-SQL statement that will be executed is a stored procedure
SqlCommand webpage_view = new SqlCommand("stpr_webpage_view", objConn);
webpage_view.CommandType = CommandType.StoredProcedure;

SqlParameter webpage_id_parm = webpage_view.Parameters.Add("@webpage_id", SqlDbType.Int);
webpage_id_parm.Value = 1;

// Create a SqlDataAdapter to manage the connection and retrieve the data
SqlDataAdapter da_1 = new SqlDataAdapter();

// Instantiate a DataSet that will hold the data retrieved from the database using the select command
DataSet ds_webpage = new DataSet();
da_1.SelectCommand = webpage_view;

// Retrieve the data into the dataset using the SqlDataAdapter.Fill command
da_1.Fill( ds_webpage, "webpage_title" );
DataView dv2 = ds_webpage.Tables[0].DefaultView;

if ( !this.IsPostBack )
{
    this.DataBind ( );
}
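In case it matters, the stored procedure itself is nothing fancy, basically just a select by id along these lines (simplified here, and the table/column names are approximate):
<code>
-- Simplified version of the proc; table and column names are placeholders
CREATE PROCEDURE stpr_webpage_view
    @webpage_id int
AS
    SELECT webpage_title
    FROM   webpage
    WHERE  webpage_id = @webpage_id
GO
</code>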
I'm trying to justify to people in a meeting why we have to break down some large financial instrument tables in a new database design we are implementing.
They have currently broken the tables down into Instruments1 and Instruments2, containing approx. 300 and 150 columns respectively (further columns will be added!), and then they have a view that joins virtually all of these columns. I've asked for the tables to be broken down into more logical groups. They are saying that this will cost more overhead, as their views will then span more tables. I also complained about the width of the rows, but they claimed that SQL 2005 will not have a problem because only 1/3 of a row will have data at any one point, so it will not cause overhead. Can someone give me some definite points on this relating to 2005, as I am fairly new to it?
Can someone help me with the basic reasons why the tables should be split into more logical groups?
They are also intending to add millions of records a year to this table and to partition it across different filegroups. Should you place the indexes for tables on different filegroups as well?
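By an index on its own filegroup I mean something like this (all names made up) - is this the sort of thing that actually helps in practice?
<code>
-- Nonclustered index created on a separate filegroup from the table's data
CREATE NONCLUSTERED INDEX IX_Instruments1_TradeDate
ON dbo.Instruments1 (TradeDate)
ON [IndexFileGroup];
</code>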
Is it the intent of Microsoft to enforce a vertical layout of the SSIS Control Flow, Data Flow, etc.? I like to work down and to the right, and I am having problems with the precedence constraint connectors (what the DTS 2000 world implied with Success / On Success). Is there a way to connect from the lower centre anchor point to the side of the next object in the flow? For some reason, I can't get a look similar to the DTS 2000 environment (going from the bottom middle of one object to the side anchor point (small box) of the next object to the left). It wants to anchor on the bottom right, left, or middle if going horizontally. I must be missing something here??
I'm a bit new to administering a SQL Server and this seems like a pretty basic question, but I'm not sure how to phrase it for the searches. So I apologize for seeking an indulgence...
I have SQL Server 2005 Standard Edition running on a server exposed to the Internet. A handful of clients have to connect to it via TCP/IP using SQL Server Management Studio and ODBC links. But my Windows logs are chock-full of failure audits, several times a minute, from what I presume to be your garden-variety crackers trying default passwords.
What's the best solution to this, and how would I go about implementing it? Restrict TCP/IP access to certain IP ranges? Is there a 'max login attempts' somewhere? The server uses SQL authentication (not windows) if that makes a difference.
We have a large SQL database being accessed by a C# application. Each hit on the database opens a connection and then closes it when done. We are currently experiencing errors when more than 20 users access the database at one time. One of the things I am wondering is whether we are using connection pooling. From BOL I am led to believe connection pooling is off by default. Can I turn on connection pooling as a default in SQL Server, or do I pass 'pooling=true' as a parameter on each connection string?
I'm trying to do something very basic here, but I'm totally new to MS SQL Server Manager and MS SQL in general. I'm using the Database Designer, and I created two tables with the following properties: Schedule (ScheduleID as uniqueidentifier PK, Time as datetime) and Course (CourseID as uniqueidentifier PK, Name as varchar(50)).

I create a relationship by dragging the PK from the first table over to the second, linking the ScheduleID and CourseID columns (I'm not certain what type of relationship is created here - N:N?). It appears to work: I can do a SELECT * and join the two tables to get a joined query. The problem starts when I try to populate the tables: a course will have a schedule, but I can't seem to get the rows to populate across both tables. I try to select the PK from the first table and insert it into the second, but it complains about it not being a uniqueidentifier.

I know this is very basic, but I can't seem to find this very basic tutorial anywhere. I come from the Oracle world of doing DBs, so if you have some examples that relate across, that would be great - or better yet, if you can point me to a good reference for doing M$ DB stuff, that would be great. Thanks.
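Scripted out, I think what I'm actually trying to end up with is something along these lines - this is only my best guess, so the foreign-key placement may well be exactly what I'm getting wrong:
<code>
CREATE TABLE dbo.Schedule (
    ScheduleID uniqueidentifier NOT NULL PRIMARY KEY,
    [Time]     datetime         NOT NULL
);

CREATE TABLE dbo.Course (
    CourseID   uniqueidentifier NOT NULL PRIMARY KEY,
    Name       varchar(50)      NOT NULL,
    ScheduleID uniqueidentifier NOT NULL
        REFERENCES dbo.Schedule (ScheduleID)   -- a course points at its schedule
);

-- Parent row first, then the child row reusing the same GUID
DECLARE @sid uniqueidentifier;
SET @sid = NEWID();

INSERT INTO dbo.Schedule (ScheduleID, [Time]) VALUES (@sid, GETDATE());
INSERT INTO dbo.Course (CourseID, Name, ScheduleID) VALUES (NEWID(), 'Algebra 101', @sid);
</code>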
I have 3 'separate' but related processes that I am developing in SSIS. Each process shares a few global variables and connection objects. None of the processes depends on inputs or outputs from the other processes. However, there is a sequence order in which they are executed.
Should I develop these three process in the same package or 3 separate packages in the same solution?
From the point of view of sharing the same variables and connection objects, maybe just one package? If so, would it be advisable to use the toolbox's Sequence Container to ensure the order in which the 3 processes are run? I noticed that 'global variables' are limited to the scope of an individual package, so it would appear that a variable can't be shared between packages in the same solution? I was also thinking that in terms of configuration settings there might be an advantage to keeping everything in the same package. The only disadvantage I can think of with keeping everything in one package is that the design layout might look a bit cluttered.
I'm tending towards one package here, but would welcome comments and suggestions from those who have experience of similar design scenarios.
Hello, I'm trying to create a simple backup in the SQL Maintenance Plan that will make a single backup copy of all databases every night at 10 pm. I'd like the previous night's file to be overwritten, so there will be only a single backup file for each database (the tape backup runs every night, so each day's backup will be saved on tape). Every night the maintenance plan makes a backup of all the databases to a new file with a datetime stamp, meaning the previous night's file still exists. Even when I check "Remove files older than 22 hours", the previous night's file still exists. Is there any way to create a backup file without the datetime stamp so it overwrites the previous night's file? Thanks! Rick
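P.S. If the maintenance plan can't do it, would scripting the backup myself as a SQL Agent job be the way to go? Something like this is what I have in mind (the path is made up); as far as I know, WITH INIT overwrites whatever is already in the file:
<code>
-- Overwrite the existing backup sets in the file instead of appending to them
BACKUP DATABASE MyDb
TO DISK = N'E:\Backups\MyDb.bak'
WITH INIT;
</code>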
I am writing a program where I will be calculating some values that need to be sent to a SQL database row by row. I am new to SQL database programming and I was wondering what I need to include in my program as far as declarations and data adapter connections in order to accomplish this. Or is it enough to already be connected to my database via a manual data connection? I also need to know some of the basic commands for inserting a new row and adding data under existing columns, prior to moving on to the next row and repeating the process. If you can point me to a basic example that demonstrates this, including how to properly close out my database, I would appreciate it. If you know of a good VB2008 SQL database book that covers all the basics and some advanced SQL database programming, I would be interested in that as well. For example:

Row#  Name  CalcA  CalcB  CalcC  CalcD
1     ?     ?      ?      ?      ?
2     ?     ?      ?      ?      ?
Etc...

Thanks. Techno
Visual Studio Express 2008 user
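P.S. On the SQL side, I'm assuming each row just needs an INSERT along these lines (table and column names are made up to match my example), and it's the VB plumbing around it that I'm really missing:
<code>
INSERT INTO dbo.Results (Name, CalcA, CalcB, CalcC, CalcD)
VALUES ('Sample1', 1.23, 4.56, 7.89, 0.12);
</code>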
I'm new to Database Mirroring and I have a question about the principal database server. I have a Database Mirroring setup configured for high-safety mode with automatic failover, using a witness.
When a failover occurs because of a loss of communication between the principal and the mirror, the mirror server takes on the role of principal. When communication to the original principal server is restored, does the database that was previously the principal automatically go back to being the principal at some point?
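From the little I've read, I suspect it does not fail back automatically and I would have to fail back manually by running something like this on the new principal once the old box is healthy and resynchronized (database name made up) - is that right?
<code>
-- Manual failback: run on the current principal once the mirror is synchronized again
ALTER DATABASE MyMirroredDb SET PARTNER FAILOVER;
</code>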
I need to run two reports, each of A5 size, back to back so that they print on a single A4 page: the sale bill will be printed in the first half and the gate pass in the second half. Both reports need to be on the same page, and their size and shape should be maintained. How do I do this?
Hello, I am hoping you can help me with the following problem; I need to process the following steps every couple of hours in order to keep our SQL 2000 database as small as possible (the transaction log is 5x bigger than the db):
1. back up the entire database
2. truncate the log
3. shrink the log
4. back up once again.
As you may have determined, I am relatively new to managing a SQL Server database, and while I have found multiple articles online about the topics I need to accomplish, I cannot find any actual examples that explain where I input the code used to accomplish the above-mentioned steps. I do understand the theory behind the steps; I just do not know how to accomplish them! If you know of a well-documented tutorial, please point me in the right direction. Regards.
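P.S. Is it just a matter of putting something like this in a SQL Agent job scheduled every couple of hours? The database name, logical log file name and paths are all guesses on my part, and this is SQL 2000 syntax:
<code>
USE MyDb;

-- 1. back up the entire database
BACKUP DATABASE MyDb TO DISK = N'E:\Backups\MyDb.bak' WITH INIT;

-- 2. truncate the log (SQL 2000 syntax; I understand this breaks the log backup chain)
BACKUP LOG MyDb WITH TRUNCATE_ONLY;

-- 3. shrink the log file (logical file name is a placeholder; target size in MB)
DBCC SHRINKFILE (MyDb_Log, 500);

-- 4. back up once again
BACKUP DATABASE MyDb TO DISK = N'E:\Backups\MyDb.bak' WITH INIT;
</code>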
Does anybody know of a way to roll back SQL Server 2005 databases to SQL Server 2000? Is there a way of doing it without resorting to the Copy Database Wizard? I'd love to find a way of attaching a SS 2005 database to a SS 2000 instance without any issues.
I recently upgraded to SS 2005 and I am very unhappy with it, and I want to roll back to SS 2000, which was a lot more stable. I am having several major issues that are affecting my whole company's day-to-day operations, and the managers are not happy.

One of the issues is the night-time batch running very sluggishly for no apparent reason. This is the biggest problem because it only occurs once or so a week and causes a disturbance to the daily activities when the night-time processing isn't completed on time. The rest of the time, the batch processing runs great, even a little better than on SS 2000. I don't believe it is a matter of my application needing to be retuned, because if that were the case, why isn't it running sluggishly every night? Also, it's never the same day that the sluggish behavior occurs. If it were occurring on the same night, I would have something to investigate within our application, but it isn't.

Another issue involves a night-time job that restores a copy of the production database to the Data Warehouse server to be used for updating the data warehouse. Again, most of the time it runs great (~2 1/2 hours), but once or twice a week it goes stupid and takes 6 1/2 hours for no apparent reason. Again, it doesn't happen on the same day either, which could give me something to investigate. On SS 2000, this same job ran flawlessly; I never ran into a situation where the database restore took that long.

Yet another issue involves a SQL Server Agent job that was put into a suspended state. What is a suspended state, and how can I get the job out of it? I can find no information about the suspended state in BOL, and a Google search turned up nothing. If this suspended state was put in for security reasons, great, but then tell me how I can remove it.

I am also not happy with the fact that I can't get accurate information about the queries that are actively running at a particular moment. In SS 2000, when I noticed high CPU usage on the server, I would run the sp_who2 'active' stored proc and it would show me all the active threads and how much CPU each was consuming. I would then find the running threads with the highest CPU numbers and investigate the query to see if we could improve it. Now in SS 2005, I get into the same situation, run sp_who2, and there is no smoking gun: all of the active threads show very little CPU usage, which I am very suspicious of. What the heck happened to sp_who2? I have looked at some of the other ways of looking at running processes (i.e. sys.sysprocesses) and they don't appear to give the information I need.
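For what it's worth, this is the sort of DMV query I've been trying instead of sp_who2, and it doesn't show a smoking gun either (just a rough query I cobbled together, so it may well be missing something):
<code>
-- Currently executing requests ordered by CPU, with the statement text
SELECT r.session_id,
       r.cpu_time,
       r.total_elapsed_time,
       r.status,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50            -- skip system sessions
ORDER BY r.cpu_time DESC;
</code>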
I am very unhappy and I just want to roll back to SS 2000 and wait a couple of years before I upgrade to SS 2005.
Hi All, can anybody suggest a website where I can find articles on managing transactions with SQL Server? I'm also interested in a scenario where the transactions take place in an environment involving 2 different databases, like bank account and credit card transactions (specifically of the two-way kind). Thanks
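P.S. The kind of thing I mean is roughly this - money leaving a table in one database and arriving in a table in another, all or nothing (all names are made up, and I believe a genuinely cross-server version would also need MSDTC):
<code>
BEGIN DISTRIBUTED TRANSACTION;

UPDATE BankDb.dbo.Accounts
SET Balance = Balance - 100
WHERE AccountID = 12345;

UPDATE CardDb.dbo.CardAccounts
SET Balance = Balance + 100
WHERE CardID = 67890;

COMMIT TRANSACTION;
</code>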
I have a web application with a shopping cart. How do I stop all the shopping cart transactions from going into the db log? Is this possible? These are only transient data movements that will never need to be restored, and they are causing log bloat. Or is there a better way to stop log bloat?
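I don't think the logging itself can be switched off per table, so is the answer just to keep the log from accumulating - e.g. switching the database to the simple recovery model (assuming I give up point-in-time restores for it), something like:
<code>
-- SQL 2005 syntax; I believe the SQL 2000 equivalent is sp_dboption with 'trunc. log on chkpt.'
ALTER DATABASE ShopDb SET RECOVERY SIMPLE;
</code>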
How can we change the connection properties of a connection in a DTS package? You can loop through the connection count, but the connection ID is not a static one, so we can't rely on that. Is there another way of changing connection properties?
I am currently designing a DTS Package to import data that is processed daily into a large database.
I have to design the package such that if any step fails when importing, I roll back the entire transaction.
I have designed the package with this in mind, checked "join transaction if present" and "rollback transaction on failure" in all of the workflows. I have also made all workflows serialized.
However, when I run the package, it fails on one of the data pumps with the error: