I am newish to databases and would appreciate some advice. I think I have a solution to my problem, but it is going to take me a lot of time to get it running. If there is a better way of doing it I would like to know.
I have a table "eventDates" with columns (id, date, eventID, eventCount). The id auto-increments as a primary key, date holds the date of the event, and eventID references another table with info about the events. Up to 9 eventIDs can be added for each date, and I want eventCount to hold an integer (1 to 9) to allow me to "pivot" the data into the table below: "results", with columns (date, eventCount1, eventCount2 ... eventCount9), so each row will hold a date and none to nine eventIDs occurring on that date.
Is there an easy way to keep eventCount accurate, or do I just have to write a lot of code? I will need to be able to remove events as well as add them. I will use a mixture of stored procedures and VB.Net, I guess?
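For what it's worth, one way to avoid maintaining eventCount by hand is to not store it at all: since the slot number can always be derived from the order of the ids within a date, a query or view can compute it and do the pivot on the fly, so inserts and deletes never leave a stale count behind. A minimal sketch, assuming the table and column names above and SQL Server 2005 or later:

;WITH numbered AS (
    SELECT date,
           eventID,
           ROW_NUMBER() OVER (PARTITION BY date ORDER BY id) AS eventCount
    FROM eventDates
)
SELECT date,
       [1] AS eventCount1, [2] AS eventCount2, [3] AS eventCount3,
       [4] AS eventCount4, [5] AS eventCount5, [6] AS eventCount6,
       [7] AS eventCount7, [8] AS eventCount8, [9] AS eventCount9
FROM numbered
PIVOT (MAX(eventID) FOR eventCount IN ([1],[2],[3],[4],[5],[6],[7],[8],[9])) AS p;

If eventCount really does have to be stored, the usual alternative is a trigger on eventDates that renumbers the rows for the affected date after every insert or delete.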
I would like to AUTOMATICALLY count the events for the month BEFORE today, and count the events remaining in the month (including those for today).
I can count the events remaining in the month manually with this query (today being March 20):
SELECT Count(EventID) AS [Left for Month] FROM RECalendar WHERE (EventTimeBegin >= DATEADD(DAY, 1, (CONVERT(char(10), GETDATE(), 101))) AND EventTimeBegin < DATEADD(DAY, 12, (CONVERT(char(10), GETDATE(), 101))))
Could anyone provide me with the correct syntax to count the events for the current month before today, and to count the events remaining in the month, including today?
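For reference, a hedged sketch of one way to get both counts, reusing the RECalendar table and EventTimeBegin column from the query above; the month boundaries are computed rather than hard-coded, and DATEFROMPARTS needs SQL Server 2012 or later:

DECLARE @today date = CAST(GETDATE() AS date);
DECLARE @monthStart date = DATEFROMPARTS(YEAR(@today), MONTH(@today), 1);
DECLARE @nextMonth date = DATEADD(MONTH, 1, @monthStart);

-- events earlier in the current month (before today)
SELECT COUNT(EventID) AS [Before Today]
FROM RECalendar
WHERE EventTimeBegin >= @monthStart AND EventTimeBegin < @today;

-- events remaining in the month, including today
SELECT COUNT(EventID) AS [Left for Month]
FROM RECalendar
WHERE EventTimeBegin >= @today AND EventTimeBegin < @nextMonth;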
When looking at the data, some days have multiple events. Now I want to generate a new table that shows all the dates in this month along with the number of events running on each specific day.
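A sketch of one way to build that per-day table, again assuming the RECalendar table, the EventTimeBegin column, and SQL Server 2012+ for DATEFROMPARTS: a recursive CTE generates every date in the current month, and a LEFT JOIN counts the events that fall on each one (days with no events show 0).

;WITH days AS (
    SELECT DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1) AS d
    UNION ALL
    SELECT DATEADD(DAY, 1, d)
    FROM days
    WHERE DATEADD(DAY, 1, d) < DATEADD(MONTH, 1, DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1))
)
SELECT d AS EventDate, COUNT(r.EventID) AS EventsOnDay
FROM days
LEFT JOIN RECalendar r ON CAST(r.EventTimeBegin AS date) = days.d
GROUP BY d
ORDER BY d
OPTION (MAXRECURSION 31);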
Hello experts. I have been searching for anything about this but have found very little. What events are logged in the SQL Server error logs aside from successful/failed logins, backup/restore/recover database, and SQL Server start/initialization? Can we configure this to log other events, like CREATE or DBCC events for example? If so, how? Thanks a lot.
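By default the error log only records server-level events like the ones listed (plus things such as trace flag changes, long I/O warnings, and deadlock output when the relevant trace flags are on). There is no built-in setting that routes CREATE or DBCC statements into it, but one hedged workaround is a DDL trigger that writes its own entry with xp_logevent; the trigger name and message number below are made up, and DDL triggers need SQL Server 2005 or later:

CREATE TRIGGER ddl_log_create_table
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    DECLARE @ed xml
    SET @ed = EVENTDATA()
    DECLARE @msg nvarchar(2048)
    SET @msg = 'CREATE TABLE by ' + SUSER_SNAME() + ': ' +
               @ed.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(1500)')
    -- xp_logevent writes a user-defined message to the SQL Server error log
    EXEC master.dbo.xp_logevent 60001, @msg, 'informational'
END;

On SQL Server 2008 and later, SQL Server Audit (or the default trace) is usually the cleaner way to capture DDL and DBCC activity.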
According to MS, GetLocalTime() (in C++) is only accurate to approximately a second, even though it reports milliseconds, and calling it twice and computing the interval can on occasion lead to a negative interval. Is T-SQL's GetDate() more accurate than that, or at least non-decreasing? Thanks, Jim
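For what it's worth, GETDATE() returns a datetime, which is only stored in roughly 3.33 ms increments, so back-to-back calls frequently return the identical value; SYSDATETIME() (SQL Server 2008+) returns a higher-precision datetime2. Neither is guaranteed to be strictly increasing if the system clock is adjusted. A quick, hedged way to see the effective resolution on your own server:

DECLARE @samples TABLE (d datetime)
DECLARE @i int
SET @i = 0
WHILE @i < 1000
BEGIN
    INSERT INTO @samples VALUES (GETDATE())
    SET @i = @i + 1
END
-- how many distinct values 1000 back-to-back calls actually produced
SELECT COUNT(*) AS Calls, COUNT(DISTINCT d) AS DistinctValues
FROM @samples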
I'm just trying out this new try-catch stuff in sql server 2005....
Using the example from the help, "Using TRY...CATCH in Transact-SQL", it shows how things are done in the AdventureWorks database, logging the error to the error log table and all that. Looks great, but I notice that when I implement this code in my project and test it by putting in a line that causes a divide-by-zero error, the line number reported by error_line is actually not the line at which the divide-by-zero code resides.
Any suggestions as to what would put the error_line value out of whack? I have comments and some string literals in the code... would they be throwing it off?
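One thing to keep in mind: ERROR_LINE() reports the line number counted from the start of the failing batch or from the CREATE PROCEDURE statement of the routine that raised the error, not the line number shown for the whole script in your editor, so GO separators, anything above the procedure definition, and errors raised inside nested procedures or dynamic SQL will all make it look shifted. A tiny hedged sketch of what it reports:

BEGIN TRY
    SELECT 1 / 0;   -- the divide by zero happens on this batch line
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()  AS ErrNum,
           ERROR_LINE()    AS ErrLine,
           ERROR_MESSAGE() AS ErrMsg;
END CATCH;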
I do hourly transaction log backups at 9,10, 11 etc... When I restore from a 9:00 backup I clearly see changes that I made after 9:00 am!!! I then noticed when I go to my scheduled backups that a 10 am backup was indeed done but in the "restore from device" tab it says the last backup was at 9 am. Apparently it is not showing the actual latest backup that was done. This explains why when restoring from a 9am backup I am seeing changes after 9, because in reality I am restoring from a 10 am backup!
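One hedged way to double-check which backup really is the latest is to read the backup history straight from msdb instead of trusting the restore dialog (substitute your database name):

SELECT TOP 10 database_name, type, backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'YourDatabase'   -- hypothetical name
ORDER BY backup_finish_date DESC;

In the type column, D is a full backup, I a differential, and L a transaction log backup.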
I want to test the time it takes for a proc to run. Since I work for a large company, I rarely have access to the SQL server in a capacity where I can use Profiler and such. Are there any quick and easy ways to just surround blocks of code in a T-SQL statement to get an accurate reading on how long it takes? I.e., if I surround it with GETDATE() before and after, does that also measure the round trips to the server, or just the execution time? I want to just compare some different methods and see what is quicker. THX.
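A hedged sketch of the usual approach: capture the time inside the same batch immediately before and after the call, so only server-side execution is measured and the client round trip is excluded; SET STATISTICS TIME ON is the other quick option and reports CPU versus elapsed time per statement. The procedure name below is a placeholder.

DECLARE @start datetime
SET @start = GETDATE()

EXEC dbo.MyProc   -- hypothetical procedure under test

SELECT DATEDIFF(ms, @start, GETDATE()) AS ElapsedMs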
USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
Each primary key is unique, but the begindate is non-unique, and I guess even if I use datetime2 and add nanoseconds, from what I have read there is a chance that I could have a duplicate datetime, since the date is imported via XML from multiple sources.
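If the only worry is that begindate by itself can collide, one hedged pattern is to lean on the primary key as a tie-breaker wherever a deterministic order or a uniqueness guarantee is needed; the column names below are assumptions:

SELECT *
FROM dbo.Testing
ORDER BY begindate, id;   -- id assumed to be the unique primary key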
With the function below, I receive this error:

Error: Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0.

Function:

Public Shared Function DeleteMesssages(ByVal UserID As String, ByVal MessageIDs As List(Of String)) As Boolean
    Dim bSuccess As Boolean
    Dim MyConnection As SqlConnection = GetConnection()
    Dim cmd As New SqlCommand("", MyConnection)
    Dim i As Integer
    Dim fBeginTransCalled As Boolean = False
    'messagetype 1 = internal messages
    Try
        ' Start transaction
        MyConnection.Open()
        cmd.CommandText = "BEGIN TRANSACTION"
        cmd.ExecuteNonQuery()
        fBeginTransCalled = True
        Dim obj As Object
        For i = 0 To MessageIDs.Count - 1
            bSuccess = False
            'delete userid-message reference
            cmd.CommandText = "DELETE FROM tblUsersAndMessages WHERE MessageID=@MessageID AND UserID=@UserID"
            cmd.Parameters.Add(New SqlParameter("@UserID", UserID))
            cmd.Parameters.Add(New SqlParameter("@MessageID", MessageIDs(i).ToString))
            cmd.ExecuteNonQuery()
            'then delete the message itself if no other user has a reference
            cmd.CommandText = "SELECT COUNT(*) FROM tblUsersAndMessages WHERE MessageID=@MessageID1"
            cmd.Parameters.Add(New SqlParameter("@MessageID1", MessageIDs(i).ToString))
            obj = cmd.ExecuteScalar
            If ((Not (obj) Is Nothing) AndAlso ((TypeOf (obj) Is Integer) AndAlso (CType(obj, Integer) > 0))) Then
                'more references exist so do not delete message
            Else
                'this is the only reference to the message so delete it permanently
                cmd.CommandText = "DELETE FROM tblMessages WHERE MessageID=@MessageID2"
                cmd.Parameters.Add(New SqlParameter("@MessageID2", MessageIDs(i).ToString))
                cmd.ExecuteNonQuery()
            End If
        Next i
        ' End transaction
        cmd.CommandText = "COMMIT TRANSACTION"
        cmd.ExecuteNonQuery()
        bSuccess = True
        fBeginTransCalled = False
    Catch ex As Exception
        'LOG ERROR
        GlobalFunctions.ReportError("MessageDAL:DeleteMessages", ex.Message)
    Finally
        If fBeginTransCalled Then
            Try
                cmd = New SqlCommand("ROLLBACK TRANSACTION", MyConnection)
                cmd.ExecuteNonQuery()
            Catch e As System.Exception
            End Try
        End If
        MyConnection.Close()
    End Try
    Return bSuccess
End Function
Hi all, here are my goals: Have the same DB on two different stand-alone computers, and keep them up-to-date from each other.
Basically a user would input to a DB for a week. Then every week or two, update the other stand alone DB with the new input. The DB would be exactly the same.
What are my options for this? I'd like it as easy as possible! Are there any software packages that deal with this type of transfer, etc.? Thank you!
SELECT [Issue date],DATEDIFF("dd",[Issue date],[Start date])/365 AS runningdays FROM Database1..[Insurance Policies Working DB] WHERE [Policy Number] LIKE '%1368529%'
The part 'DATEDIFF("dd",[Issue date],[Start date])' comes out as 364 if calculated on its own. However, when it is divided by 365 it comes out as 0. How do I get it to show as a decimal instead of just rounding down automatically? (Hope I've made sense.)
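The rounding comes from integer division: DATEDIFF returns an int, and int divided by int is an int in T-SQL, so 364 / 365 truncates to 0. Making either operand a decimal keeps the fraction, for example:

SELECT [Issue date],
       DATEDIFF(dd, [Issue date], [Start date]) / 365.0 AS runningdays
FROM Database1..[Insurance Policies Working DB]
WHERE [Policy Number] LIKE '%1368529%'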
Key, Name, Address, City, State, Zip ... etc.
I would like to keep this table sorted by Name; therefore I won't have to sort my results with every query.
I think I need to add something to my insert to tell my table: "Hey, take Jones", open up the proper place and stick him in the proper spot.
Ex: We have Appleby and Robertson in our table now. My insert would tell SQL Server to take Jones, figure out where he belongs (alphabetically), and stick him in, resulting in:
Appleby Jones Robertson
This way I won't have to ask the query to sort stuff every time I reference this table; this will save lots and lots of overhead and help keep my clients happy with quick(er) response.
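For what it's worth, SQL Server never guarantees row order without an ORDER BY, so there is nothing to add to the INSERT itself; the closest equivalent is a clustered index on Name, which keeps the rows physically organized by that key so an ORDER BY Name can usually be satisfied without an extra sort step. A hedged sketch, with the table name assumed:

CREATE CLUSTERED INDEX IX_Customers_Name
ON dbo.Customers (Name);

Queries that need sorted output should still say ORDER BY Name; with the index in place that ordering is close to free.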
I need to update a row but keep a lock on the table (so no one else can update it) while I run some more code. In Oracle, it always locks whatever you update until you hit commit, but SQL Server works the opposite way. How do I tell it not to commit a statement, or how would I explicitly get a lock and then release it later?
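A hedged sketch of the SQL Server side: by default every statement auto-commits, so the trick is to open an explicit transaction (or SET IMPLICIT_TRANSACTIONS ON for Oracle-like behavior) and keep it open while the extra code runs; the update's locks are held until COMMIT or ROLLBACK. Table and column names below are made up:

BEGIN TRANSACTION;

UPDATE dbo.Accounts
SET Balance = Balance - 100
WHERE AccountID = 42;

-- ... run the additional code here; other sessions cannot update this row ...

COMMIT TRANSACTION;   -- the locks are released here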
I have a problem concerning keeping track of a value within a query. I have a table that tracks invoices received and payments made. For each invoice number there may be multiple payments made against it. I need something that will check and make sure that each invoice number has payments equal to its received amount.
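A hedged sketch of one way to flag invoices whose payments do not add up to the received amount; every table and column name here is an assumption, since the post does not give the schema:

SELECT i.InvoiceNumber,
       i.ReceivedAmount,
       COALESCE(SUM(p.PaymentAmount), 0) AS TotalPaid
FROM dbo.Invoices AS i
LEFT JOIN dbo.Payments AS p
       ON p.InvoiceNumber = i.InvoiceNumber
GROUP BY i.InvoiceNumber, i.ReceivedAmount
HAVING COALESCE(SUM(p.PaymentAmount), 0) <> i.ReceivedAmount;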
I have a winform application with C# front end and sql express 05 backend.
In this database I have a table that holds manufacturer-provided pricing, and the manufacturers we work with update pricing constantly.
We have one table called "manufacturerpricing" which we are constantly inserting and deleting pricing records to/from to keep manufacturer pricing up to date. We may insert and delete as many as 2,000,000 records per month into this table.
This works perfectly fine and we have no problems here at all.
But with that being said, I am worried about the size of the database growing out of control due to temporary space etc. The database just keeps getting bigger and bigger.
How do I run some maintenance to keep the database size under control?
I would like to run this automatically from the C# front end, so if there is a stored proc I can call or a C# assembly I can reference, that would be ideal.
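A hedged sketch of the kind of maintenance that usually matters for heavy insert/delete churn: rebuild the indexes on the hot table so the space is compacted and reusable, and make sure the transaction log is being backed up (or the database is in simple recovery) so the log can wrap around. Regularly shrinking files is generally discouraged because it fragments indexes. The statements below could be wrapped in a stored procedure and called from the C# front end on a schedule; the database name is a placeholder.

-- compact and defragment the constantly churned table
ALTER INDEX ALL ON dbo.manufacturerpricing REBUILD;

-- only for one-off reclaiming of file space, not routine use
-- DBCC SHRINKDATABASE (N'YourDatabase');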
I'd like to keep state between calls to a UDF (mainly for caching purposes). I can shove an object into the appdomain using SetData and read it using GetData, but that requires the assembly to be set to UNSAFE. I'm confident I can secure the DB and the assembly fairly well, but I like defense in depth, and if there's another way to save state between calls to a UDF, I would prefer those.
Is there another way to store state between calls to a UDF, without putting data into DB tables or using things that will require the assembly to have such a wide permission set?
This may be more of a data design question and not an SSIS question, but I figured folks here could have a good idea. The organization I'm in has the business need of collecting data from outside organizations and tracking what data is bad and what data is good. When I say bad data I mean everything from values outside of range to absolute *** - characters in integer columns, integers in character columns, special characters, etc. The data comes in as flat files, so it's a free-for-all until it hits SSIS and the db engine.
Eventually of course they work to get the data corrected at the source & resubmitted but in the meantime, they have the legitimate need of not only pushing the data into the database (dirty or not), but keeping all the bad stuff. I can't in good conscience make everything a varchar to catch everything - that would go against the database gods. IMO - I still must make an integer be an integer , characters are characters, etc. But what do I do with the junk? Any thoughts?
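One hedged pattern that keeps the typed columns honest while still retaining the junk: land the raw file in an all-varchar staging table, then split rows into the properly typed table and a reject table that preserves the original text plus a reason. Every name below is an assumption, and TRY_CONVERT needs SQL Server 2012+ (older versions can do the equivalent with an SSIS conditional split or ISNUMERIC-style checks).

-- rows that convert cleanly go to the typed table
INSERT INTO dbo.Orders_Typed (OrderID, Quantity)
SELECT TRY_CONVERT(int, s.OrderID), TRY_CONVERT(int, s.Quantity)
FROM dbo.Orders_Staging AS s
WHERE TRY_CONVERT(int, s.OrderID) IS NOT NULL
  AND TRY_CONVERT(int, s.Quantity) IS NOT NULL;

-- everything else is kept verbatim with a reject reason
INSERT INTO dbo.Orders_Rejects (OrderID_Raw, Quantity_Raw, RejectReason)
SELECT s.OrderID, s.Quantity, 'failed type conversion'
FROM dbo.Orders_Staging AS s
WHERE TRY_CONVERT(int, s.OrderID) IS NULL
   OR TRY_CONVERT(int, s.Quantity) IS NULL;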
Hi, I am trying to find a way to capture all the status information (start time, execution time, status messages, etc.) from executing a DTS package into a table I will create in a database. Does anyone know where that information is kept? When I execute the DTS package manually, a window comes up and shows the status of each step within the DTS package. I am hoping to capture this information and load it into my log table.
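If the package has "Log package execution to SQL Server" enabled in its properties, DTS writes package- and step-level history into msdb, which can then be copied into your own log table. A hedged sketch; the exact columns vary a little by version, so SELECT * is used here:

SELECT * FROM msdb.dbo.sysdtspackagelog;

SELECT * FROM msdb.dbo.sysdtssteplog;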
I'm exporting a large database. In enterprise manager the settings on all of the PK/FK relationships are that "Enable relationship for Insert AND Update" is UNchecked. I need it this way, so I can delete and insert to the tables without being hassled by THE MAN.
When I export the database, using DTS I export all the objects (EVERYTHING), and all the data too. When I open the freshly copied database and get properties on any relationship "Enable relationship for Insert AND Update" is CHECKED! ARGH!
How do I keep this from happening? I'm so frustrated. It is very time consuming to uncheck that darn box on hundreds of relationships. Why doesn't it just stay the way it is set in the original source DB ??
Is there a way to export a database and keep it EXACTLY the same?
If anyone can help me with this it would save me dozens of hours in work. Thanks in advance.
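As a hedged stopgap after the copy, constraint enforcement can be switched off for every table in one pass instead of unchecking hundreds of boxes by hand (sp_MSforeachtable is undocumented but has shipped with SQL Server for a long time):

EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

Re-enabling later is the same call with CHECK CONSTRAINT ALL.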
I have a trigger that keeps track of status changes...
IF UPDATE(STATUS)
BEGIN
    DECLARE @currentdate datetime
    DECLARE @currentstatus integer
    DECLARE @UserID integer
    DECLARE @PermitID integer
    DECLARE @Status integer
[Code] .....
It works, but not the way I want it to. The @currentstatus and @newstatus are the same. I want the status before and after the update. I asked around as to how to do this, and someone told me to use the Deleted table.
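That advice is sound: inside an UPDATE trigger, the deleted pseudo-table holds the row values as they were before the update and inserted holds them afterwards, so joining the two on the key gives the old and new status side by side. A hedged fragment for the trigger body; the history table, key column, and status column names are assumptions based on the variables above:

IF UPDATE(STATUS)
BEGIN
    INSERT INTO dbo.StatusHistory (PermitID, OldStatus, NewStatus, ChangedBy, ChangedAt)
    SELECT d.PermitID, d.Status, i.Status, SUSER_SNAME(), GETDATE()
    FROM inserted AS i
    INNER JOIN deleted AS d ON d.PermitID = i.PermitID
    WHERE d.Status <> i.Status;
END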