My application uses SQL for performing operations, something like connection.execute(query), so there is only a connection object, no recordset object or anything like that.
I want to run multiple instances of my application while maintaining data integrity, and I am looking for a SQL-based locking mechanism so that concurrent data access doesn't corrupt the data.
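For illustration, a minimal sketch of pessimistic locking done purely in SQL, assuming a hypothetical Accounts table: the UPDLOCK/HOLDLOCK hints make a second instance running the same block wait instead of reading a value that is about to change.

BEGIN TRANSACTION;

DECLARE @Balance money;

-- Take an update lock on the row so concurrent callers queue up here.
SELECT @Balance = Balance
FROM dbo.Accounts WITH (UPDLOCK, HOLDLOCK)
WHERE AccountId = 42;

UPDATE dbo.Accounts
SET Balance = @Balance - 100
WHERE AccountId = 42;

COMMIT TRANSACTION;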
I am using SQL Server 2012. I want to maintain all types of logs in a particular database or server. I want to track every query executed against a particular database, and all other activity.
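One built-in option is SQL Server Audit; a rough sketch is below. The names and the file path are placeholders, and note that database-level audit specifications require Enterprise Edition on SQL Server 2012.

USE master;
CREATE SERVER AUDIT QueryAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\');   -- folder must already exist
ALTER SERVER AUDIT QueryAudit WITH (STATE = ON);

USE MyDatabase;
CREATE DATABASE AUDIT SPECIFICATION QueryAuditSpec
    FOR SERVER AUDIT QueryAudit
    ADD (SELECT, INSERT, UPDATE, DELETE ON DATABASE::MyDatabase BY public)
    WITH (STATE = ON);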
Is there any way to maintain an audit trail of access to my SQL Server 2000 database by any user? I need to log the timestamp of any insert/update/delete to any record in a table within the database, along with the user who made the change.
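On SQL Server 2000 the usual route is a trigger per table; a minimal sketch with made-up names (AuditLog, Orders) follows.

CREATE TABLE dbo.AuditLog (
    AuditId   int IDENTITY(1,1) PRIMARY KEY,
    TableName sysname NOT NULL,
    Action    varchar(10) NOT NULL,
    UserName  sysname NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt datetime NOT NULL DEFAULT GETDATE()
)
GO

CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    -- Classify the action by which virtual tables contain rows.
    DECLARE @action varchar(10)
    IF EXISTS (SELECT * FROM inserted) AND EXISTS (SELECT * FROM deleted)
        SET @action = 'UPDATE'
    ELSE IF EXISTS (SELECT * FROM inserted)
        SET @action = 'INSERT'
    ELSE
        SET @action = 'DELETE'

    INSERT INTO dbo.AuditLog (TableName, Action) VALUES ('Orders', @action)
END
GO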
We have a Silverlight-based application which currently supports only one production version. The idea is to support three concurrent versions of the same application; users will switch to the newer versions based on their interest, or they can continue with the older version.
We still have to use the existing database for all three versions.
What is the best way to architect this so that we can differentiate the code between the versions, keep the data in sync, and run all the versions in parallel?
I'm working with a third-party database (SQL Server 2005) and the problem here is the following:
- There are a bunch of ETL processes that need to insert rows into a table (let's call this table T) while, at the same time, an ERP (the owner of T) is up and running (reading, updating, and inserting on T).
- The PK of T is an Integer.
Today all the ETL processes use (SELECT MAX(ID) + 1 FROM T) to insert new rows, so just picture the scenario. It is a mess! Every day they get duplicate key errors when two or more concurrent processes try to insert a row (with the same max) at the same time.
Considering that I can't change the PK, what is the best approach to solve this problem?
To sum up:
* I need to have processes in parallel inserting on T
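For reference, one commonly suggested workaround, sketched with placeholder names: a single-row allocator table that every inserter updates atomically, so two concurrent callers can never read the same "next" value. This only removes the duplicates if the ERP's inserts also go through it, or if the allocator is seeded into a range the ERP never reaches.

-- One-time setup: seed from the current MAX(ID) of T.
CREATE TABLE dbo.KeyAllocator (LastId int NOT NULL);
INSERT INTO dbo.KeyAllocator SELECT MAX(ID) FROM dbo.T;
GO

-- Per insert: increment and capture the value in one atomic statement.
DECLARE @NewId int;
UPDATE dbo.KeyAllocator SET @NewId = LastId = LastId + 1;
INSERT INTO dbo.T (ID) VALUES (@NewId);   -- plus the table's other columns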
Hi experts, my msdb database is about 2 GB, which to me is really big. Is there a way to maintain that, and how? Also, the disk-level fragmentation is bad on one of my drives (some data files are in there, and msdb is there too). Is there any third-party tool I can use to do the defragmentation on a schedule? Please help, thanks!
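msdb usually grows because backup and job history are never purged. A sketch of the standard cleanup is below; both procedures ship with SQL Server, and the three-month cutoff is just an example value.

DECLARE @cutoff datetime;
SET @cutoff = DATEADD(month, -3, GETDATE());

-- Purge old backup/restore history and old Agent job history.
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;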
Hi all, pardon me for asking such a question; I am still a beginner with ASP.NET. I have a project that requires a single operation to update two databases. How do I maintain a transaction across these two databases? Please advise, thank you!
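If both databases are reachable from one SQL Server instance (the second via a linked server) and MSDTC is running, the work can be wrapped server-side; a sketch with placeholder names follows. From the ASP.NET side, System.Transactions.TransactionScope achieves the same effect.

BEGIN DISTRIBUTED TRANSACTION;

UPDATE DatabaseA.dbo.Orders SET Status = 'Shipped' WHERE OrderId = 1;
UPDATE LinkedSrv.DatabaseB.dbo.Shipments SET Shipped = 1 WHERE OrderId = 1;

-- Both updates commit together or roll back together.
COMMIT TRANSACTION;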
Sebastian Garibaldi writes "Hi, I'm Sebastian from Argentina, and I have a problem with a SQL database. I receive errors from the database about broken indexes and consistency errors. I set the fill factor using the information from Books Online; I put 70 on tables that have a lot of INSERT/UPDATE/DELETE activity, but it only works for two or three days.
I know that I must do some maintenance on the database, but which tools should I use?
Here I paste an error from DBCC CHECKTABLE:
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 981108969. The text, ntext, or image node at page (1:949979), slot 52, text ID 57535781339136 is not referenced.
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 981108969. The text, ntext, or image node at page (1:949979), slot 53, text ID 57535782191104 is not referenced.
DBCC results for 'FCRMVI'.
There are 108460 rows in 17430 pages for object 'FCRMVI'.
CHECKTABLE found 0 allocation errors and 2 consistency errors in table 'FCRMVI' (object ID 981108969).
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKTABLE (ArleiProd.dbo.FCRMVI).
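For reference, the sequence that matches the repair level DBCC reported, using the database and table names from the output. REPAIR_ALLOW_DATA_LOSS can discard data, so take a full backup first and treat it as a last resort.

-- Repairs require exclusive access to the database.
ALTER DATABASE ArleiProd SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKTABLE ('FCRMVI', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE ArleiProd SET MULTI_USER;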
Can anyone give me an idea of what percentage of organizations use 'code' to maintain the parent-child relations on their tables, rather than having FK constraints in the db model? All the companies that I have worked with used 'code' to control the relationships across the tables (not the PK/FKs!). Thanks, Neil.
I have to synchronize 2 databases hourly, but I am having difficulty maintaining foreign key relations. These tables use auto-increment columns as primary keys, with child records in other tables related through foreign keys. I can't change the way the local software uses primary or foreign keys, as it is hardcoded in the local app (Microsoft Retail Management System); however, the web/remote app is easily customized. I am using CDB Synchronizer to sync the two databases because the remote one is MySQL.
Example table layout: the Items table has an auto-increment primary key 'id'; the TransactionEntry table has its own auto-increment primary key 'id' and a foreign key 'item_id'.
Example of how the remote and local database foreign key relations become incorrect after a sync with CDB Synchronizer: at 8:00 am, on first installation of the database, the 'Items' tables' auto-increment 'id' columns match, with the last record's id value at '6'.
Locally, the following products are added:
11001 short sleeve t: added to the 'Items' table with primary key 'id' of '7'
11002 long sleeve t: added to the 'Items' table with primary key 'id' of '8'
Remotely, the following products are added:
21001 hipster jeans: added to the 'Items' table with primary key 'id' of '7'
31001 overalls: added to the 'Items' table with primary key 'id' of '8'
Remotely, someone orders 21001, so the TransactionEntry table records a sale with 'item_id' of '7'; but after the sync with our local server, the product with 'item_id' of '7' is 'short sleeve t'.
At 9:00 the sync takes place, and the item_id foreign key isn't accurate because of the independent auto-increment values.
Whenever a product is ordered, the TransactionEntry table records the product's id value from its own local copy; after the sync, the 'item_id' field no longer matches the 'Items' table's id field, and the data about the transaction's product is lost.
I have read of solutions involving staging/temporary tables to cascade-update the foreign keys before syncing into the main database, but hopefully there is a more elegant solution. If that is the only way, will it be reliable? A foreign key mismatch seems like it could cause havoc.
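One frequently used alternative is non-overlapping key ranges, so the two sides can never mint the same id; a sketch using the table names from the example (the 1,000,000 offset is an arbitrary assumption):

-- On the local SQL Server copy, push the identity seed far above
-- anything the remote MySQL side will generate.
DBCC CHECKIDENT ('Items', RESEED, 1000000);
DBCC CHECKIDENT ('TransactionEntry', RESEED, 1000000);

-- The remote MySQL tables keep their current AUTO_INCREMENT values,
-- which stay below 1,000,000, so synced rows keep distinct keys.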
I would like to maintain version control of all the SQL objects (stored procedures, views, tables) and maintain source code version control. Is there any way to use TFS to maintain versions of SQL objects? Also, what folder structure should be used with TFS?
Hey everyone, I'm new to .NET and I've recently inherited a rather large and busy ASP.NET website. I was asked to add a testimonials section on each page that randomly pulls a testimonial out of the DB. This is fine; however, I'm getting random errors about the DB connection either being closed or connecting. Here is the code for the testimonials class:

public SqlDataReader GetTestimonials(ref SqlDataReader reader, int iCatID, string sLanguageType)
{
    SqlCommand cmd = new SqlCommand("sp_DVX_Testimonials_Fetch", Connection);
    cmd.CommandType = CommandType.StoredProcedure;

    cmd.Parameters.Add(new SqlParameter("@Cat_ID", iCatID));
    cmd.Parameters.Add(new SqlParameter("@LanguageType", sLanguageType));

    reader = cmd.ExecuteReader();

    return reader;
}

I know this isn't the best way to do this (especially for each page; this site averages about 1000 hits a day), so I was wondering: is there a way to maintain a single DB connection, set up in Application_Start, so that I don't have to worry about this error? If not, does anyone have any ideas as to what would help? Thanks in advance!
Hi guys, we have a scenario where there are about 50 tables in our database, and we want to build an intranet web application for users within the office to access those tables. Users' ability to access tables falls into different categories:
- Some users can NOT view some tables at all.
- Some users can ONLY view some tables, but not insert/update any field.
- Some users can view and also insert/update some tables (while possibly not having view (SELECT) permission on some other tables).

Now, what is the right way to implement this? I say we have to have Role, RolePermission, User, and UserPermission tables inside our database (something which would look like the roles and users inside MSSQL), with only one login for our database (MachineName/ASPUSER) to access the database and all the tables within. My colleague says no: instead of creating all these tables, we should add every user of our application as a database user inside MSSQL. All the web applications I have seen so far (DNN, CommunityServer, ...) have tables to implement all this, and they don't add users inside MSSQL. Which way should we go, and what problems might we run into if we use SQL users? Is this possible at all? How can I convince him that we have to make and use our own tables to manage this? Thanks for any help, Mehdi
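For reference, a minimal sketch of the application-level tables described above; the columns are illustrative assumptions, and a per-user UserPermission table would follow the same pattern as RolePermission.

CREATE TABLE dbo.Role (RoleId int PRIMARY KEY, RoleName nvarchar(100) NOT NULL);
CREATE TABLE dbo.[User] (UserId int PRIMARY KEY, UserName nvarchar(100) NOT NULL);

-- Which roles each user holds.
CREATE TABLE dbo.UserRole (
    UserId int NOT NULL REFERENCES dbo.[User](UserId),
    RoleId int NOT NULL REFERENCES dbo.Role(RoleId),
    PRIMARY KEY (UserId, RoleId)
);

-- What each role may do, per table.
CREATE TABLE dbo.RolePermission (
    RoleId    int NOT NULL REFERENCES dbo.Role(RoleId),
    TableName sysname NOT NULL,
    CanSelect bit NOT NULL DEFAULT 0,
    CanInsert bit NOT NULL DEFAULT 0,
    CanUpdate bit NOT NULL DEFAULT 0,
    PRIMARY KEY (RoleId, TableName)
);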
Say I have a result set with two fields, Number and Letter.
Number  Letter
1       A
3       A
1       B
2       B
The result set is ordered by the Letter column. How can I select the distinct numbers from the result set but maintain the current order? When I try

select distinct Number from MyResultSet
it will reorder the new result set by the Number field and return
1
2
3
However, I'd like to maintain the Letter order and return
1
3
2
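A sketch of one way to get that, assuming the intended output above: GROUP BY yields the distinct numbers, and ordering by each number's first Letter (with Number as a tie-breaker) preserves the letter-driven order.

SELECT Number
FROM MyResultSet
GROUP BY Number
ORDER BY MIN(Letter), Number;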
I would appreciate it if anyone could help me in this matter.
The problem is: how do I maintain perpetual inventory transaction table order with batch-mode updating?
I have designed a table to hold all inventory transactions. The table order is perfectly maintained with online updating, but if I go with batch updating, the order of the transactions collapses. For example, consider the following table design (note I used an autonumber to maintain the order).
Version used: SQL Server 2000 with service pack updates.
Of course, if the order collapses, the costing cannot be accurate, so please, could anyone help me solve this problem? Many software packages do not post in sequence if we choose batch mode.
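For illustration, one way to make the costing order independent of the physical insert order, sketched with hypothetical names: carry an explicit transaction datetime and sort on it, using the autonumber only as a tie-breaker.

SELECT ItemId, TxnDate, Qty, Cost
FROM dbo.InventoryTxn
ORDER BY TxnDate, TxnId;   -- TxnId is the autonumber, used only to break ties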
I have a new cluster (2 sync, 2 async) with about 50 databases ranging from 1 GB to 200 GB (all of the objects are compressed), on SQL Server 2012 SP1 CU7. I have several drives for logs with 200 GB of space there. I am having issues rebuilding indexes in this environment: I have a table with the clustered index heavily fragmented (~80%), and the table has about 60 GB of data; uncompressed, that would be about 160 GB.
The index rebuild creates a log file big enough to consume all the space that I have for logs, and that is only one table, so my old process for maintaining indexes (Ola Hallengren's code) surely won't work in this scenario.
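A sketch of the usual mitigations (index and table names are placeholders): REORGANIZE works in many small transactions, so frequent log backups can truncate the log while it runs, and when a full rebuild is unavoidable, SORT_IN_TEMPDB at least moves the sort work off the log drive.

-- Incremental defragmentation; the log can be backed up and truncated
-- while this runs.
ALTER INDEX IX_BigTable ON dbo.BigTable REORGANIZE;

-- When a full rebuild is required (ONLINE needs Enterprise Edition):
ALTER INDEX IX_BigTable ON dbo.BigTable
REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);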
We are setting up the client's database with transactional replication that allows the subscriber to send updates via pull subscriptions.
The plan is to create a second database so the users can keep working on the original database while we get the replication set up and working. Once we feel confident that the replication is working on the second database, the plan is to back up the old one, then detach it, and then restore the new database with the latest data from the old one. My question is whether this would preserve all the replication configuration that we set up. Since the replication adds a column to each table, what would happen to that column? Or, alternatively, is there an approach that would allow us to set up the replication without disturbing the users' work, and then implement the replication with the latest data?
I am also wondering how to set the synchronization to happen once daily. I do not see where I can set that; I only see options for continuous vs. on demand. Does on demand mean I can schedule it with an external program to run once a day?
Thanks for your patience, and I hope my questions make sense.
I am trying to encode a barcode in Reporting Services using SQL Server 2005. They told me it is possible to maintain the barcode on the exported Excel sheet; however, I did not find the barcode on it, only a square black image.
I have transactional replication. The publisher DB contains a table called Course with a timestamp column; this column's values are unique within the publisher DB.
The subscriber DB also contains the same copy of the data in the publisher DB's Course table, but the timestamp column values are different.
So my problem is: how can I keep these two tables (Course) identical, with the same timestamp column values in both tables?
NOTE: the publisher and subscriber DBs reside under two different SQL Server instances.
I am working in ASP.NET 2.0 and using SQL Server 2000 as the backend. In my application I need to insert/update an Oracle database table residing on a different server. Please let me know how I can maintain two different connections to databases on different servers.
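On the SQL Server side, one option is to register the Oracle server as a linked server, so the Oracle table is reachable from the same connection; a sketch for SQL Server 2000, where 'OracleTNSAlias' and the other names are placeholders. Alternatively, the application can simply hold two connection strings, one per database, in Web.config.

EXEC sp_addlinkedserver
    @server = 'ORACLE_LINK',
    @srvproduct = 'Oracle',
    @provider = 'MSDAORA',
    @datasrc = 'OracleTNSAlias';

-- Oracle objects are then addressable with four-part names, e.g.:
-- INSERT INTO ORACLE_LINK..SCHEMA_NAME.TABLE_NAME (COL1) VALUES (1);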
I am going to use the backup and restore functions to copy data from one server to the other. We would like to keep the servers in sync at this point (not instantaneously, but updating the second server, say, once a day), and I would like to do this by using transaction log backups. I have tried restoring from individual transaction logs, but it seems to require restoring the full database as well. The database is roughly 6 GB, and the transaction logs are about 25-50 MB. I really do not want to have to restore the full database every time.
I know I could set up replication, but this has been a pain to administer on a daily basis. I would like a schedule-and-forget type of thing. This is going to be done on 6.5.
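For reference, this is the classic log-shipping pattern: restore the full backup once, leaving the database able to accept further restores, then apply each day's log backup on its own. The syntax below is the later RESTORE form with placeholder paths; on 6.5 the equivalents are LOAD DATABASE and LOAD TRANSACTION.

-- One-time: restore the full backup without recovering the database.
RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_full.bak'
WITH NORECOVERY;

-- Daily: apply only the new transaction log backup.
RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_log.trn'
WITH NORECOVERY;   -- stay in NORECOVERY to keep applying further logs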
I am trying to export the result of a SELECT into a .csv file using SQL Server 2000 DTS. The data for the varchar fields has leading zeroes in the database, which are very much required in the csv file.
But the .csv file trims the leading zeroes. How do we force it to maintain the same data as in the source?
I used a Text File Destination Connection as the destination, with the options below:
File Extension: .csv
File Format: Delimited
File Type: ANSI
Text Qualifier: Double Quotes ("")
Row Delimiter: {CR}{LF}
Column Delimiter: comma
Source data: 0123. Target data (requirement): 0123.
The data in the .csv: 123 (this is the issue).
When I open the file in a text editor, I do see the data in double quotes: "0123".
I have two production servers in the same data center. I need to ensure that the database remains available if a catastrophic server failure or a disk failure occurs. I need to maintain transactional consistency of the data across both servers, and I need to achieve these goals without manual intervention. The option suggested to me was:
Two servers configured on the same subnet, with a SQL Server availability group configured in synchronous-commit availability mode.
But I think the correct answer should be:
Two servers configured in a Windows failover cluster in the same data center, with SQL Server configured as a clustered instance.
I've created the initial indexes for my table for the fuzzy lookup process. I clicked on "Maintained index", but I don't see any triggers created on the reference table.
Do I create the triggers to maintain the index myself?
Does anybody know how to create these triggers in terms of schema_name, data modification statements, etc.?
Or would it be an ALTER INDEX <index name> REBUILD command?
I had an Excel file as input and imported it into a DB table using a Data Flow in SSIS, but it had duplicates, and I don't want the duplicate records.
So I planned it like below:
Method 1: one OLE DB Destination gets the good records (without duplicates) and another OLE DB Destination gets the bad records (only duplicates).
Method 2: I add a column (GOOD_RECORD) to the DB table and update it to '1' for the top 1 record of each duplicate set (the good record) and '0' for the remaining records (the duplicates); later I use the GOOD_RECORD flag, i.e.:
select * from DB_TABLE where GOOD_RECORD='1'.
I think Method 2 is advisable for performance/flexibility, but how can I do that update using SSIS (Data Flow)?
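For reference, Method 2 can be done in one set-based statement, for example in an Execute SQL Task after the Data Flow; KeyCol here is a made-up stand-in for whatever columns define a duplicate.

WITH Ranked AS (
    SELECT GOOD_RECORD,
           ROW_NUMBER() OVER (PARTITION BY KeyCol ORDER BY KeyCol) AS rn
    FROM dbo.DB_TABLE
)
UPDATE Ranked
SET GOOD_RECORD = CASE WHEN rn = 1 THEN '1' ELSE '0' END;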
I need the application to never expire the session. I need to maintain the session even if the user keeps the web application idle for a long time.
I wrote my own VB app to maintain all of my connection strings and link them to packages. I then grab them at run time and set them as variables in memory.
Based on the description below, on average how many hours a month would it take to monitor and maintain the MSSQL Server databases?
Description of the IT infrastructure: all Windows Servers and MSSQL Servers are up to date on patches and best practices.
Corporate site with 3 remote sites.
All remote sites have one DC and one MSSQL Server.
The corporate site has one MSSQL Server.
Replication is performed between the remote MSSQL databases and the corporate office MSSQL database.
There is no in-house DBA; all DBA services will have to be outsourced. I am trying to determine what is reasonable to budget for the time involved for this service.
There is one project written in MS Access using Visual Basic for Applications (VBA), with the backend residing on these databases.
The question is: on average, approximately how many hours a month would it take for a MSSQL DBA to monitor and maintain the health of the MSSQL Server databases? The DBA will not have to create any user reporting, queries, etc., just maintain the existing MSSQL Server databases.
We replicated a SQL 2000 database (DataBaseA) to another SQL 2000 database (DataBaseB) by using the Restore function, and didn't change its logical name, only the physical data path and file name. It ran fine for a year. We used the same approach to migrate DataBaseB to a new SQL 2005 server with the Restore function, and daily operation runs perfectly. However, when we do a backup of DataBaseB on SQL 2005, it prompts the error message:
System.Data.SqlClient.SqlError: The backup of full-text catalog 'DataBaseA' is not permitted because it is not online. Check errorlog file for the reason that full-text catalog became offline and bring it online. Or BACKUP can be performed by using the FILEGROUP or FILE clauses to restrict the selection to include only online data. (Microsoft.SqlServer.Smo)
Please note we left DataBaseA on the old SQL 2000 server.
Please help on how we can delete the full-text catalog from DataBaseB so we can do a backup.
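A sketch for SQL Server 2005, using the catalog name from the error message; any full-text indexes on the catalog have to be dropped first, so check what is there before dropping.

-- List the catalogs the restored database actually contains.
SELECT name, is_default FROM sys.fulltext_catalogs;

-- Then remove the orphaned catalog named in the error.
DROP FULLTEXT CATALOG [DataBaseA];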