I got a request to truncate some tables on our testing servers. There are only 11 tables, and I could go in and truncate them one after the other, but I need a script I can use to truncate all the tables in one go. The tables have no dependencies between them. A script that accomplishes the same task when the tables do have dependencies would also be useful ...
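A minimal sketch of one way to do this, assuming every user table in the database should be truncated (the statement list is generated from the system catalog, nothing is hard-coded):

DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'TRUNCATE TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N';' + CHAR(13)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

EXEC sys.sp_executesql @sql;

If the tables had foreign key dependencies, TRUNCATE TABLE would fail on any referenced table; in that case you would disable or drop the constraints first and use DELETE instead, then re-enable or recreate the constraints.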
We have many customers with a large database, many workers, and many database transactions. Every night the databases get a full backup. The log files (dbname_log.ldf) regularly grow very large; for example, a 60GB database (dbname.mdf) can have a log file of equal size. When the drive becomes too full, I truncate these log files to free disk space.
Question: is it a good decision to run a script that, after the daily full backup, truncates the log file if it is larger than, for example, 10GB?
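For reference, a minimal sketch of the usual alternative, assuming the databases are in the FULL recovery model (the database name, backup path, and logical log file name are placeholders). Note that in FULL recovery a full backup alone does not truncate the log; only log backups mark the inactive portion as reusable, so scheduled log backups keep the file from growing in the first place, and a shrink is a one-off repair after the log has already ballooned:

-- Regular log backups mark inactive log space as reusable.
BACKUP LOG dbname TO DISK = N'X:\Backups\dbname_log.trn';

-- One-off shrink back to ~10 GB once the log has been backed up.
DBCC SHRINKFILE (dbname_log, 10240);   -- target size in MB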
I have 3 tables: Customer, Sales Cost Charge, and Sales Price. I have joined the Customer table to the Sales Price table with a left outer join into a new table.
I now need to join the data in the new table to Sales Cost Charge. However, note that there is data in the Sales Price table that is not in the Sales Cost Charge table, and data in the Sales Cost Charge table that is not in the Sales Price table, but I need to get all the data. E.g. if our application shows 15 records, the Sales Price table might have 7 records and the Sales Cost Charge table 8, which together makes 15 records.
I am struggling to match the records. I have also tried a left outer join to the Sales Cost Charge table, but then I only get the 7 records that are in the Sales Price table. See code below.
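A minimal sketch of the usual fix, assuming the two tables share a join key (the table and column names below are placeholders for the real ones): a LEFT OUTER JOIN keeps only the left side's rows, while a FULL OUTER JOIN keeps unmatched rows from both sides, which is what 7 + 8 = 15 requires.

SELECT COALESCE(sp.ItemKey, scc.ItemKey) AS ItemKey,
       sp.Price,
       scc.CostCharge
FROM   SalesPrice AS sp
FULL OUTER JOIN SalesCostCharge AS scc
       ON scc.ItemKey = sp.ItemKey;   -- unmatched rows from either side come back with NULLs on the other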
I have a production database which uses merge and snapshot replication. The merge replication covers 3 tables; the snapshot updates the rest of the data once daily. I use the full recovery model and perform database backups (full, differential) and transaction log backups.
I have a database optimization plan which runs 4 times a week. This plan performs an integrity check and rebuilds the indexes. The plan grows the transaction log by about 8MB each time it runs, and we are running out of space on the drive for our log files. The space is not being reused.
I saw in another post that Gail Shaw suggested using SELECT name, log_reuse_wait_desc FROM master.sys.databases to see why the log space is not being reused. On the database in question, this returns "REPLICATION".
A colleague tried to back up the transaction log a couple of times this weekend to truncate the log. She was going to run DBCC SHRINKFILE afterwards, but the truncation failed. Again, looking into it, it seems replication prevented the truncation.
We are looking at stopping the merge replication, or even removing it, to truncate the log file and then recreating the merge replication. How should we handle shrinking the log file for now, and are there any checks or changes I can make that will allow the transaction log space to be reused?
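For what it's worth, a minimal sketch of the check plus the last-resort step that usually comes up when log_reuse_wait_desc shows REPLICATION. sp_repldone marks all pending replicated transactions as delivered, so it is only safe if you are prepared to reinitialize the subscriptions afterwards; it is normally aimed at a stuck log reader (transactional replication or CDC), so whether it applies to this merge/snapshot topology is an assumption to verify first. The database name below is a placeholder:

-- Confirm what is holding the log.
SELECT name, log_reuse_wait_desc
FROM master.sys.databases
WHERE name = N'YourDb';

-- Last resort, run in the published database: declare everything delivered,
-- then reinitialize the subscriptions.
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1;

After that, a log backup followed by DBCC SHRINKFILE should go through.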
Hi, does anybody have a script for truncating all the tables in a database? I have a user request asking me to restore a database into a new database and to truncate all the data, leaving the structure and objects intact. I tried using DTS, etc., but I'm running into foreign key problems, login problems, etc. Any new ideas?
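A minimal sketch of one common workaround for the foreign key problem (sp_MSforeachtable is undocumented, but it has shipped with SQL Server for a long time): disable all constraints, DELETE instead of TRUNCATE (TRUNCATE is blocked on any table referenced by a foreign key), then re-enable with checking.

EXEC sp_MSforeachtable N'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
EXEC sp_MSforeachtable N'DELETE FROM ?';
EXEC sp_MSforeachtable N'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';

DELETE is fully logged, so on a big database expect log growth, and identity columns may need reseeding with DBCC CHECKIDENT afterwards.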
I am trying to pull in columns from multiple tables but am getting an error when I run the code:
Msg 4104, Level 16, State 1, Line 1
The multi-part identifier "a.BOC" could not be bound.
I am guessing that my syntax is completely off.
SELECT b.[PBCat]
     , c.[VISN]   -- trying to pull in the column [VISN] from the table [DIMSTA]; current status: failure
     , a.[Station]
     , a.[Facility]
     , a.[CC]
     , a.[Office]
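Msg 4104 almost always means an alias used in the SELECT list (here a) is never introduced in the FROM clause, or is spelled differently there. A minimal sketch of the shape the query needs, where the table names and join keys are placeholders apart from [DIMSTA], which the comment above names as the source of [VISN]:

SELECT b.[PBCat], c.[VISN], a.[Station], a.[Facility], a.[CC], a.[Office], a.[BOC]
FROM   dbo.SourceTable AS a                               -- every alias used in the SELECT list
INNER JOIN dbo.PBTable AS b ON b.[Station] = a.[Station]  -- must be declared in the FROM clause
INNER JOIN dbo.DIMSTA  AS c ON c.[Station] = a.[Station];

If a never appears after FROM, "a.BOC" cannot be bound no matter how the rest of the statement is written.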
Apparently the dependencies are not calculated correctly because I'm using a temporary table.
My problem is that I have updated this stored procedure (and the two others that I call) to add a new parameter. As a consequence, replication fails, saying that I have an extra parameter. I imagine that because my dependencies are not correct, the replication is not occurring in the correct order, so it is still using the old definition of the stored procedure.
Do you have any idea how I can force the dependencies to be calculated correctly?
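For reference, a minimal sketch of two things that may be worth trying, on the assumption that stale module metadata rather than the temp table itself is what replication is tripping over (the procedure names are placeholders):

-- Refresh the cached metadata for each procedure, innermost first,
-- so callers pick up the new parameter.
EXEC sys.sp_refreshsqlmodule @name = N'dbo.InnerProc';
EXEC sys.sp_refreshsqlmodule @name = N'dbo.OuterProc';

-- Check what the server currently records as depending on the inner procedure.
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.InnerProc', N'OBJECT');

If the dependency information is still wrong after a refresh, scripting the procedures in explicit order (for example via a post-snapshot script) is the usual fallback.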
I have tried joining several tables, and the result displays duplicate rows for virtually every line/row. I have tried using DISTINCT, but this didn't work. I suspect it could be because several of the tables have columns with the same names.
I have two tables, StudentTable and CourseTable, where each student takes more than one course (1:M). For example, Student1 takes 2 courses (C1, C2) and Student2 takes 3 courses (C1, C2, C3). I need to create a table/view that contains the student information from StudentTable plus all the courses and the score for each course from CourseTable, in one row.
For example:
Row1 = Student1_Id, C1_code, C1_name, C1_Score, C2_code, C2_name, C2_Score
Row2 = Student2_Id, C1_code, C1_name, C1_Score, C2_code, C2_name, C2_Score, C3_code, C3_name, C3_Score
Since Student1 only has two courses, I should enter NULL in the 'Course 3' fields. My struggle is with the insert statement; I tried the following, but it shows an error:
INSERT INTO NewTable (St_ID, C1_code, C1_name, C1_Score, C2_code, C2_name, C2_Score, C3_code, C3_name, C3_Score)
SELECT (SELECT St_ID FROM StudentTable),
       (SELECT C_code, C_name, C_Score FROM CourseTable, StudentTable WHERE CourseTable.StId = StudentTable.StId),
       (SELECT C_code, C_name, C_Score FROM CourseTable, StudentTable WHERE CourseTable.StId = StudentTable.StId),
       (SELECT C_code, C_name, C_Score FROM CourseTable, StudentTable WHERE CourseTable.StId = StudentTable.StId);
I'm fully aware that the new table/view will break the rules of normalization, but I need it for a specific purpose. I also tried the PIVOT functionality, but had no luck with it. I also tried writing code in MATLAB (because it is high-level software that is easy to learn for people who are not expert programmers, like me), but didn't know how to combine the student and course matrices in my loop.
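A minimal sketch of how this is usually done in T-SQL, assuming CourseTable has columns (StId, C_code, C_name, C_Score) and StudentTable is keyed on St_ID (names inferred from the attempt above): number each student's courses with ROW_NUMBER, then spread them across columns with conditional aggregation. Students with fewer than three courses get NULLs in the remaining columns automatically.

;WITH numbered AS (
    SELECT StId, C_code, C_name, C_Score,
           ROW_NUMBER() OVER (PARTITION BY StId ORDER BY C_code) AS n
    FROM CourseTable
)
INSERT INTO NewTable (St_ID, C1_code, C1_name, C1_Score,
                      C2_code, C2_name, C2_Score,
                      C3_code, C3_name, C3_Score)
SELECT s.St_ID,
       MAX(CASE WHEN n.n = 1 THEN n.C_code  END),
       MAX(CASE WHEN n.n = 1 THEN n.C_name  END),
       MAX(CASE WHEN n.n = 1 THEN n.C_Score END),
       MAX(CASE WHEN n.n = 2 THEN n.C_code  END),
       MAX(CASE WHEN n.n = 2 THEN n.C_name  END),
       MAX(CASE WHEN n.n = 2 THEN n.C_Score END),
       MAX(CASE WHEN n.n = 3 THEN n.C_code  END),
       MAX(CASE WHEN n.n = 3 THEN n.C_name  END),
       MAX(CASE WHEN n.n = 3 THEN n.C_Score END)
FROM StudentTable AS s
LEFT JOIN numbered AS n ON n.StId = s.St_ID
GROUP BY s.St_ID;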
Child1 Table

OrderID  ControlName
1        Row1Column1   (means Pant in Red is selected by the user; relates to the Child2 table)
1        Row3Column1   (means Gown in Blue is selected by the user; relates to the Child2 table)
1        Row4Column3   (means T Shirt in White is selected by the user; relates to the Child2 table)
2        Row1Column2   (means Tie in Green is selected by the user; relates to the Child2 table)
2        Row3Column1   (means Bow in Red is selected by the user; relates to the Child2 table)
Child2 Table
PackageID  Product  Color1  Color2  Color3
1          Pant     Red     Green   Blue
1          Shirt    Blue    Pink    Purple
1          Gown     Blue    Black   Yellow
1          T Shirt  Red     Green   White
2          Tie      Red     Green   White
2          Socks    Red     Green   White
2          Bow      Red     Green   White
We want the result to look like:
OrderID  PackageID  CustomerName  Pant  Gown  T Shirt  Tie    Bow
1        1          ABC           Red   Blue  White    x      x
2        2          Bcd           x     x     x        Green  Red
I have tried
;with mycte as (
    select ms.OrderID, ms.PackageID, ms.CustomerName,
           Replace(stuff([ControlName], charindex('Column', ControlName), len(ControlName), ''), 'Row', '')  as rowNum,
           Replace(stuff([ControlName], 1, charindex('Column', ControlName) - 1, ''), 'Column', '')          as columnNum
    From child1 c
    inner join MasterTable ms on c.Orderid = ms.orderid
)
[code]....
It works if a product comes in one color only; if we have Pant in both Red and Blue, it shows just the first record.
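A minimal sketch of one way to finish the pivot, under two assumptions worth checking: RowN in ControlName refers to the Nth product of the package in Child2's stored order (simulated here with ROW_NUMBER; a real ordering column would be safer), and ColumnM picks ColorM. CHOOSE requires SQL Server 2012 or later.

;WITH picks AS (
    SELECT ms.OrderID, ms.PackageID, ms.CustomerName,
           CAST(REPLACE(STUFF(c.ControlName, CHARINDEX('Column', c.ControlName), LEN(c.ControlName), ''), 'Row', '') AS int) AS rowNum,
           CAST(REPLACE(STUFF(c.ControlName, 1, CHARINDEX('Column', c.ControlName) - 1, ''), 'Column', '') AS int)           AS colNum
    FROM child1 c
    INNER JOIN MasterTable ms ON c.OrderID = ms.OrderID
), prods AS (
    SELECT PackageID, Product, Color1, Color2, Color3,
           ROW_NUMBER() OVER (PARTITION BY PackageID ORDER BY (SELECT NULL)) AS rowNum  -- assumed to match the UI row order
    FROM Child2
)
SELECT p.OrderID, p.PackageID, p.CustomerName,
       ISNULL(MAX(CASE WHEN pr.Product = 'Pant'    THEN CHOOSE(p.colNum, pr.Color1, pr.Color2, pr.Color3) END), 'x') AS Pant,
       ISNULL(MAX(CASE WHEN pr.Product = 'Gown'    THEN CHOOSE(p.colNum, pr.Color1, pr.Color2, pr.Color3) END), 'x') AS Gown,
       ISNULL(MAX(CASE WHEN pr.Product = 'T Shirt' THEN CHOOSE(p.colNum, pr.Color1, pr.Color2, pr.Color3) END), 'x') AS [T Shirt],
       ISNULL(MAX(CASE WHEN pr.Product = 'Tie'     THEN CHOOSE(p.colNum, pr.Color1, pr.Color2, pr.Color3) END), 'x') AS Tie,
       ISNULL(MAX(CASE WHEN pr.Product = 'Bow'     THEN CHOOSE(p.colNum, pr.Color1, pr.Color2, pr.Color3) END), 'x') AS Bow
FROM picks p
INNER JOIN prods pr ON pr.PackageID = p.PackageID AND pr.rowNum = p.rowNum
GROUP BY p.OrderID, p.PackageID, p.CustomerName;

When the same product is picked in two colors, MAX keeps only one; swap MAX for a string aggregate (FOR XML PATH, or STRING_AGG on 2017+) if all colors must appear.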
I need to write code to remove the dependency between Table1.PriKey and Table2.Id, and add a dependency between Table1.PriKey and Table3.Id. How do I write the code? Please help. Thank you very much.
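A minimal sketch, assuming the "dependency" is a foreign key constraint and that the constraint names below are placeholders for the real ones (sys.foreign_keys shows the actual names):

-- Drop the existing foreign key from Table2 to Table1.
ALTER TABLE Table2 DROP CONSTRAINT FK_Table2_Table1;

-- Create a new foreign key from Table3 to Table1.
ALTER TABLE Table3 ADD CONSTRAINT FK_Table3_Table1
    FOREIGN KEY (Id) REFERENCES Table1 (PriKey);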
In a Library Management database we have these tables:
1) Document (DocNo, Doc_type, permalink, inDate)
2) Title (id, DocNo, Main_Title, Other_Title)
3) Author (id, Author_Name, Author_Family, Type -- like: main author, translator, ...)
4) Publisher (id, DocNo, Name, Publisedate, address)
5) Subject (id, DocNo, Subject)
6) Description (id, DocNo, ISBN, description) -- one document may have several ISBNs, etc.
In the Document table I have 500,000 records.
I want to search for a word in these tables. For example, when I search for 'Computer', the word may appear in Subject, Title, Description, etc. How can I do this with the best performance?
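A minimal sketch of the usual answer at this volume, using SQL Server full-text search (the catalog and key-index names are placeholders, and each searchable table needs its own full-text index): LIKE '%Computer%' cannot use an index, while CONTAINS can.

CREATE FULLTEXT CATALOG LibraryCatalog;

CREATE FULLTEXT INDEX ON Title (Main_Title, Other_Title)
    KEY INDEX PK_Title ON LibraryCatalog;   -- PK_Title = the table's unique key index

-- Search across the indexed tables.
SELECT d.DocNo
FROM   Document d
WHERE  EXISTS (SELECT 1 FROM Title t   WHERE t.DocNo = d.DocNo AND CONTAINS(t.*, N'Computer'))
    OR EXISTS (SELECT 1 FROM Subject s WHERE s.DocNo = d.DocNo AND CONTAINS(s.Subject, N'Computer'));

CONTAINS(t.*, ...) searches all full-text-indexed columns of that table; repeat the pattern for Description and the other tables as needed.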
I need to consume a live data feed from a golf tournament. And by consume, I really mean insert (merge) it into our own SQL Server database at regular intervals as the tournament progresses. This site didn't let me upload an XML file, but you can see a sample of the data feed here: URL....
I need to insert this data into 2 tables, Player_Holes and Player_Shots. But while doing the insert, I need to look up several things, such as matching our player ID to theirs via an external_id against the players table, translating the shot types, and some other logic about the process overall.
The columns in my Player_Holes table are: id, player_id, hole_id, round, shots (a total number of strokes), and date_created/date_modified. The Player_Shots table is similar: id, player_id, hole_id, round, shot_number, shot_type_id, club, distance, date_created/date_modified.
The only way I know how to do it is inefficient: I would parse the XML in ColdFusion (please, no comments on ColdFusion; that's what we use for web dev), then loop over it and do inserts for each player and each hole of each round, with the shots probably handled separately for each hole.
It would be so much better and more efficient if I could do it in SQL directly. I've done some research, and SQL Server Data Tools looks promising. I've never used it, so I would have to learn it, but I'm also not sure it would work in this application, where we want to run the import as a scheduled task every few minutes.
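A minimal sketch of doing the shred and upsert in T-SQL alone, so it can run from a SQL Server Agent job every few minutes. The file path and all element/attribute names are invented stand-ins for whatever the real feed uses, and only the Player_Holes half is shown; Player_Shots would follow the same pattern with an extra join to the shot-type translation table.

-- Load the downloaded feed into an xml variable.
DECLARE @feed xml =
    (SELECT CAST(f.BulkColumn AS xml)
     FROM OPENROWSET(BULK N'C:\feeds\latest.xml', SINGLE_BLOB) AS f);

MERGE dbo.Player_Holes AS tgt
USING (
    SELECT pl.player_id,
           h.x.value('@number', 'int') AS hole_id,
           h.x.value('@round',  'int') AS round,
           h.x.value('@shots',  'int') AS shots
    FROM @feed.nodes('/tournament/player') AS p(pn)
    CROSS APPLY p.pn.nodes('hole') AS h(x)
    JOIN dbo.players AS pl
         ON pl.external_id = p.pn.value('@id', 'varchar(50)')   -- their player ID -> ours
) AS src
ON  tgt.player_id = src.player_id
AND tgt.hole_id   = src.hole_id
AND tgt.round     = src.round
WHEN MATCHED THEN
    UPDATE SET shots = src.shots, date_modified = SYSDATETIME()
WHEN NOT MATCHED THEN
    INSERT (player_id, hole_id, round, shots, date_created, date_modified)
    VALUES (src.player_id, src.hole_id, src.round, src.shots, SYSDATETIME(), SYSDATETIME());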
I have a database which has 25 tables. All tables have a ProductID column. I need to find the total number of records for ProductID = 20003 across all the tables in the database.
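A minimal sketch using dynamic SQL built from the system catalog, assuming the column is literally named ProductID in every table that has it:

DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N' UNION ALL SELECT ''' + t.name + N''' AS TableName, COUNT(*) AS Total FROM '
             + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' WHERE ProductID = 20003'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE EXISTS (SELECT 1 FROM sys.columns c
              WHERE c.object_id = t.object_id AND c.name = N'ProductID');

SET @sql = STUFF(@sql, 1, 11, N'');   -- strip the leading ' UNION ALL '
EXEC sys.sp_executesql @sql;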
Here is the desired counted output: I would like to pull distinct Date, MachineNumber, TestName values and then count how many times they occur in the raw data. I do need to perform a CAST on the date, because right now it's in a datetime format and I only need the date part.
I am pulling three columns with the same names from 8 different tables. I need to display the date, machine and test name, and count how many times a test was run on a machine on that date. I have a feeling this could be handled by SSAS, but I haven't built an analysis cube yet because I am unfamiliar with how they work. I was wondering if this is possible in a simple query. I tried to set something up in a #Temp table, but the query takes forever to run because I am dealing with 1.7 million rows; doing an INSERT INTO #temp SELECT columnA, columnB, columnC from 8 different tables takes a while.
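A minimal sketch of the single-query version, skipping the temp table entirely (the table names are placeholders; extend the UNION ALL list to all 8 tables):

SELECT CAST(t.TestDate AS date) AS TestDate,   -- drop the time portion
       t.MachineNumber,
       t.TestName,
       COUNT(*) AS TimesRun
FROM (
    SELECT TestDate, MachineNumber, TestName FROM dbo.Results1
    UNION ALL
    SELECT TestDate, MachineNumber, TestName FROM dbo.Results2
    -- ... remaining 6 tables ...
) AS t
GROUP BY CAST(t.TestDate AS date), t.MachineNumber, t.TestName;

UNION ALL (rather than UNION) avoids a needless sort/dedupe step over the 1.7 million rows.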
I am trying to run an update statement against a vendor's database that houses HR information. If I run a regular select statement against the database with the following query, it returns without error:
SELECT "QUDDAT_DATA"."QUDDAT-INT", "NAME"."INTERNET-ADDRESS", "QUDDAT_DATA"."QUDFLD-FIELD-ID", "QUDDAT_DATA"."QUDTBL-TABLE-ID" FROM "SKYWARD"."PUB"."NAME" "NAME", "SKYWARD"."PUB"."QUDDAT-DATA" "QUDDAT_DATA" WHERE ("NAME"."NAME-ID"="QUDDAT_DATA"."QUDDAT-SRC-ID") AND "QUDDAT_DATA"."QUDTBL-TABLE-ID"=0 AND "QUDDAT_DATA"."QUDFLD-FIELD-ID"=16 AND "QUDDAT_DATA"."QUDDAT-INT"=11237When I try to convert it into an
[Code] ....
I am assuming I am receiving this error because it doesn't know where to find QUDDAT-INT. How can I fix that?
The "QUDDAT-INT" column houses the employee number. So in the case of the SELECT query above, I am testing against a specific employee number.
Hi! I have a general SQL CE v3.5 design question related to table/file layout. I have a system that has multiple tables that fall into categories of data access. The 3 categories of data access are:
1 is for configuration-related data. There is one application that will read/write to the data, and a second application that will read the data on startup.
1 is for high-performance temporal storage of data. The data objects are all the same type, but they are our own custom object and not just simple types.
1 is for logging, where the data will be permanent unless the configured size/recycling settings cause a resize or cleanup. There will be one application writing a lot of data (potentially, depending on log settings) and another application searching/reading sections of data.

When working with data and designing the layout, I like to approach things from a data-centric mindset, because this seems to result in a better performing system. That said, I am thinking about using 3 individual SDF files for the above data access scenarios, as opposed to a single SDF with multiple tables. I'm thinking this would give better performance in SQL CE because the query engine will not have a lot of different types of queries going against the same database file. For instance, the temporal storage is basically reading/writing/deleting varying amounts of data, which is different from the logging, where the log can grow pretty large; definitely bigger than the default 128 MB. So it seems logical to manage them separately.
I would greatly appreciate any suggestions from the SQL CE experts on my approach. If there are any tips/tricks for the different data access scenarios, taking into account performance, type of data access, etc., I would love to take a look.
I concatenate multiple rows from one table in multiple columns like this:
--Create Table
CREATE TABLE [Person].[Person_1](
    [BusinessEntityID] [int] NOT NULL,
    [PersonType] [nchar](2) NOT NULL,
    [FirstName] [varchar](100) NOT NULL,
    CONSTRAINT [PK_Person_BusinessEntityID_1] PRIMARY KEY CLUSTERED
[Code] ....
This works very well, but I want to concatenate more rows with different [PersonType] values in different columns, and I don't like the overhead of using the same table in every subquery ([Person_1]). Is there a more elegant way to do this, without using a temp table or something else?
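A minimal sketch of one way to avoid a separate subquery per output column, assuming the goal is one comma-separated list of first names per [PersonType] value (the 'EM'/'SC'/'IN' codes are assumed example values): aggregate once per PersonType, then pivot with MAX(CASE...). FOR XML PATH is the usual pre-2017 concatenation idiom; on SQL Server 2017+ STRING_AGG is simpler.

;WITH agg AS (
    SELECT p.PersonType,
           STUFF((SELECT ', ' + p2.FirstName
                  FROM Person.Person_1 p2
                  WHERE p2.PersonType = p.PersonType
                  ORDER BY p2.FirstName
                  FOR XML PATH('')), 1, 2, '') AS Names
    FROM Person.Person_1 p
    GROUP BY p.PersonType
)
SELECT MAX(CASE WHEN PersonType = 'EM' THEN Names END) AS Employees,
       MAX(CASE WHEN PersonType = 'SC' THEN Names END) AS StoreContacts,
       MAX(CASE WHEN PersonType = 'IN' THEN Names END) AS Individuals
FROM agg;

The table is concatenated once per PersonType group rather than once per output column.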
I have a requirement where I have around 15 different flat files. The filenames are fixed, but the folder path can change (I think I should use a variable for the folder path). The data from these 15 files should go to their respective tables in the database.
Do I need to create a separate data flow task for each file, or a separate package? In addition, for example, while importing product data into the product table, if a product ID already exists, we need to ignore it and load only the new records.
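On the "ignore existing product IDs" part, a minimal sketch of the usual pattern, assuming each file is first landed in a staging table by its data flow (all table and column names are placeholders) and an Execute SQL Task then does the conditional load:

INSERT INTO dbo.Product (ProductID, ProductName, Price)
SELECT s.ProductID, s.ProductName, s.Price
FROM   dbo.Staging_Product AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Product AS p
                   WHERE p.ProductID = s.ProductID);   -- skip rows already present

Since the 15 files have different layouts and targets, they generally need 15 data flows (in one package or several); a Foreach Loop only helps when files share a format.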
TRANAMT is the amount paid and TOTBAL the balance due for the NAMEID and RMPROPID specified. The other table includes a breakdown of the total balance, in a manner of speaking, by charge code (via a SUM(OPENAMT) query over DISTINCT CHGCODE), also with a remaining-balance (per CHGCODE) column. Is there any alternative solution that would effectively split TABLE1.TRANAMT up into the respective TABLE2.CHGCODE balances? Either way, I can't figure out how to word the queries.
I have just taken over the job of sorting out a rather poorly designed database. It looks like it was 'upsized' from an Access database to SQL Server. The SQL Server is the 2000 version.
Now I am trying to generate a report of what the students in the database owe by referencing the Receipt table and then all the available payment methods and allocations. I was wondering if there is any way to work around data being displayed twice (let me demonstrate).
Note 1: All the tables are linked by a ReceiptNo key. From what I can see, there is a table for every payment type and allocation, but no link between the two other than the receipt number.
Using the query:

SELECT T_Receipt.ReceiptNo,
       T_cheque.Amount AS Chq_Amount,
       T_credit.Amount AS Cre_Amount,
       StandingOrder.Amount AS Stn_Amount,
       T_BankTransfer.amount AS Bnk_Amount,
       T_cash.TotalAmount AS Cas_Amount,
       T_RentPayment.AmountPayed AS Ren_Paid,
       T_AdminPayment.AmountPaid AS Adm_Paid,
       T_InternetBilling.Total AS Int_Paid,
       T_Utilities.AmountPaid AS Util_Amount,
       T_InvoicePayment.amountPaid AS Inv_Paid,
       T_OtherPayments.paymentAmount AS Oth_Paid,
       T_parkingBill.paymentAmount AS Prk_Paid,
       T_TelephoneBills.TelephoneCredit AS Tel_Paid,
       T_DepositPayment.[Deposit payment] AS Dep_Amount,
       T_Receipt.cancelled AS Canceled,
       T_Receipt.RemittanceReceiptNo AS Rec_Ref,
       T_Receipt.Student
FROM T_Receipt
INNER JOIN T_DepositPayment ON T_Receipt.ReceiptNo = T_DepositPayment.receiptNo
LEFT OUTER JOIN T_RentPayment ON T_Receipt.ReceiptNo = T_RentPayment.RentPaymentNo
LEFT OUTER JOIN StandingOrder ON T_Receipt.ReceiptNo = StandingOrder.ReceiptNo
LEFT OUTER JOIN T_TelephoneBills ON T_Receipt.ReceiptNo = T_TelephoneBills.ReceiptNo
LEFT OUTER JOIN T_parkingBill ON T_Receipt.ReceiptNo = T_parkingBill.ReceiptNo
LEFT OUTER JOIN T_OtherPayments ON T_Receipt.ReceiptNo = T_OtherPayments.ReceiptNo
LEFT OUTER JOIN T_InvoicePayment ON T_Receipt.ReceiptNo = T_InvoicePayment.receiptNo
LEFT OUTER JOIN T_cash ON T_Receipt.ReceiptNo = T_cash.ReceiptNo
LEFT OUTER JOIN T_AdminPayment ON T_Receipt.ReceiptNo = T_AdminPayment.ReceiptNo
LEFT OUTER JOIN T_BankTransfer ON T_Receipt.ReceiptNo = T_BankTransfer.receiptNo
LEFT OUTER JOIN T_Utilities ON T_Receipt.ReceiptNo = T_Utilities.receiptNo
LEFT OUTER JOIN T_credit ON T_Receipt.ReceiptNo = T_credit.ReceiptNo
LEFT OUTER JOIN T_cheque ON T_Receipt.ReceiptNo = T_cheque.ReceiptNo
LEFT OUTER JOIN T_InternetBilling ON T_Receipt.ReceiptNo = T_InternetBilling.ReceiptNo
GROUP BY T_Receipt.Student, T_Receipt.ReceiptNo, T_cheque.Amount, T_credit.Amount,
         StandingOrder.Amount, T_BankTransfer.amount, T_cash.TotalAmount,
         T_AdminPayment.AmountPaid, T_InternetBilling.Total, T_Utilities.AmountPaid,
         T_InvoicePayment.amountPaid, T_OtherPayments.paymentAmount,
         T_parkingBill.paymentAmount, T_TelephoneBills.TelephoneCredit,
         T_Receipt.cancelled, T_Receipt.RemittanceReceiptNo,
         T_DepositPayment.[Deposit payment], T_RentPayment.AmountPayed
HAVING (T_Receipt.Student LIKE N'06%')
Which gives a result of:

RecNo.   Cheque   Deposit
30429    250      250
30429    679.98   250
This is fine, but when I do analysis on this it appears as though the student has paid two deposit payments. I was wondering, without querying each table independently from an application, whether there is a way to specify that I only get one deposit result. So, in effect: give me all the payments, but only one result from the other tables. I thought about DISTINCT, but that wouldn't work here.
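A minimal sketch of the usual cure for this kind of fan-out, shown for just two of the payment tables (the rest follow the same pattern): pre-aggregate each payment table to one row per receipt in a derived table, then join those one-row-per-receipt results, so a receipt with several cheque rows cannot multiply the deposit rows.

SELECT r.ReceiptNo,
       chq.Chq_Amount,
       dep.Dep_Amount
FROM T_Receipt r
LEFT OUTER JOIN (SELECT ReceiptNo, SUM(Amount) AS Chq_Amount
                 FROM T_cheque GROUP BY ReceiptNo) chq
       ON chq.ReceiptNo = r.ReceiptNo
LEFT OUTER JOIN (SELECT receiptNo, SUM([Deposit payment]) AS Dep_Amount
                 FROM T_DepositPayment GROUP BY receiptNo) dep
       ON dep.receiptNo = r.ReceiptNo
WHERE r.Student LIKE N'06%';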
I have 7 source databases and one target database, all using the same structure. The structure is made of 10 tables, with foreign key constraints.
I need to merge the source databases into the target (which won't have any data before that process, but will already have the correct schema), and to keep the relationships between the records.
I know how to iterate over the source databases (with an SMO foreach), but I'd like to know if someone can advise the best copy method for this context in SSIS. (I don't want to keep the primary keys, but I need to keep the relationships...)
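A minimal sketch of the key-remapping part, which is usually the crux whether it runs from SSIS or plain T-SQL, assuming a Parent/Child pair with identity keys (all names are placeholders): capture the old-to-new key mapping while inserting parents, then translate the foreign keys when inserting children. A plain INSERT's OUTPUT clause cannot see source columns, which is why the 1 = 0 MERGE idiom shows up here.

DECLARE @map TABLE (OldID int, NewID int);

MERGE INTO Target.dbo.Parent AS t
USING Source1.dbo.Parent AS s ON 1 = 0            -- never matches: every row is inserted
WHEN NOT MATCHED THEN
    INSERT (Name) VALUES (s.Name)
OUTPUT s.ParentID, inserted.ParentID INTO @map (OldID, NewID);

INSERT INTO Target.dbo.Child (ParentID, Detail)
SELECT m.NewID, c.Detail
FROM   Source1.dbo.Child AS c
JOIN   @map AS m ON m.OldID = c.ParentID;         -- old FK rewritten to the new parent key

Repeated per source database (the SMO foreach can drive that), this keeps the relationships without keeping the original primary key values.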
I have a couple of hundred flat files to import into database tables using SSIS.
The files can be divided into groups by the format they use. I understand that I could import each group of files that have a common format at the same time using a Foreach Loop Container.
However, the example for the Foreach Loop Container has multiple files all being imported into the same database table. In my case, each file needs to be imported into a different database table.
Is it possible to import each set of files with the same format into different tables in a simple loop? I can't see a way to make a Data Flow Destination item accept its table name dynamically, which seems to prevent me doing this.
I suppose I could make a different Data Flow Destination item for each file, in the Data Flow. Would that be a reasonable solution, or is there a simpler solution, or should I just resign myself to making a separate Data Flow for every single file?
Hello, I am building a survey application. I have 8 questions: a textbox (call reference), a dropdown menu (choose support method), radio button lists (customer satisfaction questions 1-5), and a multiline textbox (other comments). I want to insert the textbox and dropdown menu values into a db table, then insert each question score into a score column, with each question having an ID. I envisage that to do this I will need an insert query for the textbox and dropdownlist, and then an insert for each question based on ID and score. Please help me! Thanks, Andrew
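A minimal sketch of the two inserts, assuming a SurveyResponse header table and a SurveyAnswer detail table (all table and column names here are invented for illustration), parameterized as they would be when called from the application:

-- One row per submitted survey.
INSERT INTO dbo.SurveyResponse (CallReference, SupportMethod, OtherComments)
VALUES (@CallReference, @SupportMethod, @OtherComments);

DECLARE @ResponseID int = SCOPE_IDENTITY();   -- key of the row just inserted

-- One row per question score; repeat (or loop) for questions 1-5.
INSERT INTO dbo.SurveyAnswer (ResponseID, QuestionID, Score)
VALUES (@ResponseID, @QuestionID, @Score);

Storing one row per answer, rather than five score columns, keeps the design flexible if questions are added later.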