Keeping Last 10 Entries By ID
Sep 24, 2006
Hello
my table :
Report :
R_id (PK)
RName
RDate
I have tens of thousands of rows and I want to keep at most the last 10 rows (or fewer if there aren't 10) for each name.
I can have 100 reports per name (100 rows with the same name, each with a different R_id and RDate, of course).
How can I do it?
thanks a lot for helping
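A minimal sketch of one approach, assuming SQL Server 2005 or later and using the column names above: rank each name's rows from newest to oldest and delete everything past the tenth.

WITH ranked AS (
    SELECT R_id,
           ROW_NUMBER() OVER (PARTITION BY RName ORDER BY RDate DESC, R_id DESC) AS rn
    FROM Report
)
DELETE FROM ranked
WHERE rn > 10

Rows ranked 1 to 10 per RName survive. On SQL Server 2000 the same effect needs a correlated subquery that counts newer rows for the same name instead of ROW_NUMBER.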
View 1 Replies
Dec 2, 2013
If I wanted to search for jobs at a particular status (e.g. 0130) and keep returning them until they reach 0500, 0125, or 0900 in a subsequent status log entry, how can I write the SQL to achieve it?
I have the following SQL, which searches for the jobs at 0130, but I don't know how to develop it further to meet the requirement above.
------ SQL -------
SELECT
job.job_number,
(SELECT MAX(jsl.job_log_number)
FROM job_status_log jsl
WHERE
job.job_number = jsl.job_number AND
jsl.status_code = '0130') as Last_Early_Warning_Status_Entry
[code].....
In the job_status_log table above, there is a job_log_number field which increments by 1 when there is a new status log entry.
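A sketch of one possible direction, reusing the table and column names above (untested against the real schema): return only jobs that have a 0130 entry and no later log entry at 0500, 0125, or 0900.

SELECT job.job_number
FROM job
WHERE EXISTS
      (SELECT 1 FROM job_status_log jsl
       WHERE jsl.job_number = job.job_number
         AND jsl.status_code = '0130')
  AND NOT EXISTS
      (SELECT 1 FROM job_status_log later
       WHERE later.job_number = job.job_number
         AND later.status_code IN ('0500', '0125', '0900')
         AND later.job_log_number >
             (SELECT MAX(jsl.job_log_number)
              FROM job_status_log jsl
              WHERE jsl.job_number = job.job_number
                AND jsl.status_code = '0130'))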
View 1 Replies
View Related
Aug 26, 2002
SQL 7, SP3, NT4
How do I keep the job history of a job, say if I re-create the job?
We recreate the jobs often as part of a code move, but I'd like to retain the history of the previous jobs.
** sp_help_jobhistory -- only shows the jobs that exist, and not old jobs that no longer reside on the server.
Thanks,
AJ
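One sketch of a workaround, assuming a job named 'MyJob' and a local archive table name: snapshot the history rows out of msdb before the job is dropped and recreated. History is keyed by job_id, and dropping a job normally removes its history, so archiving by name keeps the old runs queryable.

SELECT j.name AS job_name, h.step_id, h.step_name, h.run_status,
       h.run_date, h.run_time, h.message
INTO dbo.job_history_archive
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE j.name = 'MyJob'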
View 1 Replies
View Related
Jan 10, 2005
Hi all, here are my goals: have the same DB on two different stand-alone computers and keep them up to date with each other.
Basically a user would input to one DB for a week. Then every week or two, update the other stand-alone DB with the new input. The two DBs would end up exactly the same.
What are my options for this? I'd like it as easy as possible! Are there any software packages that deal with this type of transfer, etc.? Thank you!
View 5 Replies
View Related
May 23, 2008
Hi, I have the following code in a query:
SELECT [Issue date],DATEDIFF("dd",[Issue date],[Start date])/365 AS runningdays
FROM Database1..[Insurance Policies Working DB]
WHERE [Policy Number] LIKE '%1368529%'
The part 'DATEDIFF("dd",[Issue date],[Start date])' comes out as 364 if calculated on its own. However, then when it is divided by 365 it comes out as 0. How do I get it to show as a decimal instead of just rounding it down automatically? (Hope I've made sense)
Thanks in advance
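A sketch of the usual fix: both DATEDIFF and 365 are integers, so the division is integer division and the fraction is discarded. Making either operand a decimal (365.0 below) keeps the fraction.

SELECT [Issue date],
       DATEDIFF(dd, [Issue date], [Start date]) / 365.0 AS runningdays
FROM Database1..[Insurance Policies Working DB]
WHERE [Policy Number] LIKE '%1368529%'

CAST(DATEDIFF(dd, [Issue date], [Start date]) AS decimal(9,4)) / 365 works the same way.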
View 3 Replies
View Related
Nov 28, 2003
we have a simple table
Key, Name, Address, City, State, Zip ................ect
I would like to keep this table sorted by Name, therefore I won't have to sort my results with every query.
I think I need to add something to my insert to tell my table: "Hey, take Jones", open up the proper place and stick him in the proper spot.
Ex: We have Appleby and Robertson in our table now. My insert would tell SQL Server to take Jones, figure out where he belongs (alphabetically), and stick him in, resulting in:
Appleby
Jones
Robertson
This way I won't have to ask the query to sort stuff every time I reference this table; this will save lots and lots of overhead and help keep my clients happy with quick(er) response.
thanks in advance -arthur
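A sketch of the standard answer: a clustered index on Name keeps the rows physically ordered by Name, and new rows slot into place on insert (the table name below is illustrative). Note that a guaranteed order in results still requires ORDER BY; the clustered index just makes that sort essentially free.

CREATE CLUSTERED INDEX IX_Customers_Name ON dbo.Customers (Name)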
View 3 Replies
View Related
Nov 7, 2001
Hi
We need to keep track of all changes that are made to our tables.
The changes will be saved in a table that records:
- the table in which the change was made
- the name of the field that was changed
- the old data for the field
- the new data for the field etc..
I've seen a few examples that record the name of the table that was modified, but none that record changes down to the field level.
Can anybody give some guidance?
Thanks..
Wayne
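A minimal sketch of a field-level audit trigger for one table and one column (the table, column, and audit-table names are assumptions, not from the original post); the same pattern is repeated per audited column.

CREATE TRIGGER trg_Customers_Audit ON dbo.Customers
FOR UPDATE
AS
IF UPDATE(City)
    INSERT INTO dbo.ChangeLog (TableName, FieldName, OldValue, NewValue, ChangedAt)
    SELECT 'Customers', 'City', d.City, i.City, GETDATE()
    FROM inserted i
    JOIN deleted d ON d.CustomerID = i.CustomerID
    WHERE ISNULL(d.City, '') <> ISNULL(i.City, '')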
View 1 Replies
View Related
Feb 5, 2004
I need to update a row but keep a lock on it (so no one else can update it) while I run some more code. In Oracle, it always locks whatever you update until you hit commit, but SQL Server seems to work the opposite way. How do I tell it not to commit a statement, or how would I explicitly get a lock and then release it later?
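A sketch: inside an explicit transaction SQL Server behaves like Oracle here; the UPDATE's exclusive lock is held until COMMIT (or ROLLBACK), so other sessions trying to update the same row will block. Table and column names below are illustrative.

BEGIN TRANSACTION

UPDATE dbo.MyTable
SET SomeColumn = 'new value'
WHERE KeyColumn = 42

-- ... run the additional code here; the row stays locked ...

COMMIT TRANSACTION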
View 4 Replies
View Related
Feb 27, 2007
Hi, is there any way to set a breakpoint and step through execution, like we do in .NET? If so, please tell me how to do it.
View 3 Replies
View Related
Nov 13, 2007
Hello All,
I have a problem concerning keeping track of a value within a query.
I have a table that tracks invoices received and payments made.
For each invoice number there may be multiple payments made against it.
I need something that will check and make sure that each invoice number has its payments equal to its received amount.
Any help would be greatly appreciated.
Thanks,
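A sketch of one way to spot mismatches, assuming hypothetical Invoices and Payments tables joined on InvoiceNumber (adjust to the real schema): list every invoice whose payments don't add up to the received amount.

SELECT i.InvoiceNumber, i.ReceivedAmount,
       ISNULL(SUM(p.PaymentAmount), 0) AS TotalPaid
FROM dbo.Invoices i
LEFT JOIN dbo.Payments p ON p.InvoiceNumber = i.InvoiceNumber
GROUP BY i.InvoiceNumber, i.ReceivedAmount
HAVING i.ReceivedAmount <> ISNULL(SUM(p.PaymentAmount), 0)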
View 5 Replies
View Related
Oct 16, 2006
Hello everyone,
I have a winform application with C# front end and sql express 05 backend.
In this database I have a table that holds manufacturer-provided pricing, and the manufacturers we work with update pricing constantly.
We have one table called "manufacturerpricing" which we are constantly inserting and deleting pricing records to/from to keep manufacturer pricing up to date. We may insert and delete as many as 2,000,000 records per month into this table.
This works perfectly fine and we have no problems here at all.
But with that being said, I am worried about the size of the database growing out of control due to temporary space, etc. The database just keeps getting bigger and bigger.
How do I run some maintenance to keep the database size under control?
I would like to run this automatically from the C# front end, so if there is a stored proc I can call or a C# assembly I can reference, that would be ideal.
Any help is greatly appreciated.
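A sketch of the kind of maintenance usually suggested for this pattern (the database name is illustrative); both commands can be wrapped in a stored procedure and called from C# via SqlCommand.ExecuteNonQuery.

DBCC DBREINDEX ('dbo.manufacturerpricing')   -- rebuild indexes fragmented by the insert/delete churn
DBCC SHRINKDATABASE (PricingDb, 10)          -- reclaim unused space, leaving 10 percent free

Frequent shrinking is usually discouraged because the file just regrows; rebuilding indexes and reviewing the recovery model and log backups is often enough.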
View 3 Replies
View Related
Dec 14, 2006
Hi,
I'd like to keep state between calls to a UDF (mainly for caching purposes). I can shove an object into the appdomain using SetData and read it using GetData, but that requires the assembly to be set to UNSAFE. I'm confident I can secure the DB and the assembly fairly well, but I like defense in depth, and if there's another way to save state between calls to a UDF, I would prefer it.
Is there another way to store state between calls to a UDF, without putting data into DB tables or using things that will require the assembly to have such a wide permission set?
Thanks,
Alex
View 4 Replies
View Related
May 8, 2007
Let's say I develop a 'version 1' of a program/database and ship it to some customers.
Then I continue work on version 2, with some database changes etc.
When version 2 is shipped to the customers, they need a way to upgrade their database
from version 1 to version 2. I guess this is easiest done with one or several SQL scripts.
What I want is your opinion about the best way to keep track of the changes between
version 1 and 2. Here are two ways with pros and cons I've thought of:
1 - Keep manual track of every change (index, changed columns, relations etc) and update a scriptfile with all changes.
Pros: Full control of what is happening
Cons: Pretty much extra work and risk of missing some changes.
2 - When version 2 is ready, run some third party tool to get a diff between the databases
Pros: Fast and easy
Cons: The DB-changes may need to be applied in a special order, with default values, etc etc and I'm not sure all tools handle this in a good way...?
So... what are your recommendations? I'm sure this must be a headache in every development project?
Regards Andreas
View 13 Replies
View Related
Jan 24, 2008
This may be more of a data design question and not an ssis question, but figured folks here could have a good idea.....the organization I'm in has the business need of collecting data from outside organizations and tracking what data is bad and what data is good. When I say bad data I mean everything from things outside of range to absolute *** - characters in integer columns, integers in character columns, special characters, etc. The data comes in in the form of flat file so it's a free for all until it hits ssis & the db engine.
Eventually of course they work to get the data corrected at the source & resubmitted but in the meantime, they have the legitimate need of not only pushing the data into the database (dirty or not), but keeping all the bad stuff. I can't in good conscience make everything a varchar to catch everything - that would go against the database gods. IMO - I still must make an integer be an integer , characters are characters, etc. But what do I do with the junk? Any thoughts?
View 4 Replies
View Related
May 16, 2006
I am newish to databases and would appreciate some advice. I think I have a solution to my problem, but it is going to take me a lot of time to get it running. If there is a better way of doing it, I would like to know.
I have a table "eventDates" with columns (id, date, eventID, eventCount).
The id auto-increments as the primary key.
date holds the date of the event.
eventID references another table with info about the events.
Up to 9 eventIDs can be added for each date, and I want eventCount to hold an integer (1 to 9) to allow me to "pivot" the data into the table below:
"results" with columns (date, eventCount1, eventCount2, ..., eventCount9), so each row will hold a date and one to nine eventIDs occurring on that date.
Is there an easy way to keep eventCount accurate, or do I just have to write a lot of code? I will need to be able to remove events as well as add them. I will use a mixture of stored procedures and VB.Net, I guess?
Many thanks for any advice.
Mike
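A sketch of one way to avoid maintaining eventCount by hand, assuming SQL Server 2005 or later: recompute it from scratch after each insert or delete, numbering the events on each date in id order.

WITH numbered AS (
    SELECT eventCount,
           ROW_NUMBER() OVER (PARTITION BY [date] ORDER BY id) AS rn
    FROM dbo.eventDates
)
UPDATE numbered
SET eventCount = rn

This could live in the stored procedure that adds or removes events, so eventCount never drifts. On SQL Server 2000 an equivalent correlated-subquery UPDATE would be needed instead.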
View 4 Replies
View Related
Jul 2, 2001
Hi, I am trying to find a way to capture all the status information (start time, execution time, status messages, etc.) from executing a DTS package into a table I will create in a database. Does anyone know where that information is kept?
When I execute the DTS package manually, a window comes up and shows the status of each step within the DTS package. I am hoping to capture that information and load it into my log table.
Thanks in advance.
View 1 Replies
View Related
Aug 17, 1999
How can I instruct SQL Server to keep an entire table in memory, i.e. so its memory pages are not swapped out to disk?
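On the SQL Server versions of that era (6.5/7.0) the usual answer was DBCC PINTABLE, which keeps a table's pages in the buffer cache once they have been read. A sketch (database and table names are illustrative):

DECLARE @dbid int, @objid int
SELECT @dbid = DB_ID('MyDatabase'), @objid = OBJECT_ID('MyTable')
DBCC PINTABLE (@dbid, @objid)
-- DBCC UNPINTABLE (@dbid, @objid) releases it again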
View 2 Replies
View Related
Sep 19, 2000
I'm exporting a large database. In enterprise manager the settings on
all of the PK/FK relationships are that
"Enable relationship for Insert AND Update" is UNchecked.
I need it this way, so I can delete and insert to the tables
without being hassled by THE MAN.
When I export the database, using DTS I export all the objects (EVERYTHING),
and all the data too. When I open the freshly copied database and get
properties on any relationship "Enable relationship for Insert AND Update"
is CHECKED! ARGH!
How do I keep this from happening? I'm so frustrated.
It is very time consuming to uncheck that darn box on hundreds
of relationships. Why doesn't it just stay the way it is set in
the original source DB ??
Is there a way to export a database and keep it EXACTLY the same?
If anyone can help me with this it would save me dozens of hours in work.
Thanks in advance.
Josh
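One sketch of a quick fix after the copy, rather than unchecking hundreds of boxes by hand: the Enterprise Manager checkbox maps to the constraint's CHECK/NOCHECK setting, so the undocumented but long-standing sp_MSforeachtable procedure can disable checking on every constraint in one pass.

EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'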
View 10 Replies
View Related
Aug 28, 2015
I have a trigger that keeps track of status changes...
IF UPDATE(STATUS)
BEGIN
DECLARE @currentdate datetime
DECLARE @currentstatus integer
DECLARE @UserID integer
DECLARE @PermitID integer
DECLARE @Status integer
[Code] .....
It works, but not the way I want it to. The @currentstatus and @newstatus are the same. I want the status before and after the update. I asked around as to how to do this and someone told me to use the deleted table.
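That advice is the standard pattern: in an update trigger the deleted pseudo-table holds the before image and inserted holds the after image. A sketch using column names suggested by the variables above (the log table name is an assumption):

IF UPDATE(STATUS)
BEGIN
    INSERT INTO dbo.StatusLog (PermitID, OldStatus, NewStatus, ChangedBy, ChangedAt)
    SELECT d.PermitID, d.STATUS, i.STATUS, SYSTEM_USER, GETDATE()
    FROM inserted i
    JOIN deleted d ON d.PermitID = i.PermitID
    WHERE ISNULL(d.STATUS, -1) <> ISNULL(i.STATUS, -1)
END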
View 3 Replies
View Related
May 14, 2004
I have just finished upsizing an Access database to SQL Server 2000. Now SQL Server needs to be run on a test basis to determine if I need to make more changes to the front end (Access). The problem I am facing is how to keep the two databases in sync while I am testing. Any suggestions?
Also, any suggestions or comments on how to run a test setup like this (in parallel) are welcome, since this is my first time attempting a project like this.
Let me know if anyone needs more info.
Thanks in advance.
View 1 Replies
View Related
Apr 10, 2008
Hi all
I have a stored procedure with a few DML statements inside.
I need to log the success/failure of each DML statement into a separate audit table, as follows:
Begin
DML Stmt1
Log success/failure (insert to audit table)
DML Stmt2
Log success/failure (insert to audit table)
end
When any of the above DML statements fails, the entire transaction gets rolled back, including the audit information.
How can I keep the audit info intact when the transaction is rolled back?
Thanks.
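A sketch of one common workaround: table variables are not undone by ROLLBACK, so the audit rows can be collected in a table variable during the transaction and written to the real audit table after the rollback (names are illustrative; assumes SQL Server 2005+ for TRY/CATCH).

DECLARE @audit TABLE (StmtName varchar(200), Succeeded bit, LoggedAt datetime)

BEGIN TRANSACTION
BEGIN TRY
    -- DML Stmt1 ...
    INSERT INTO @audit VALUES ('Stmt1', 1, GETDATE())
    -- DML Stmt2 ...
    INSERT INTO @audit VALUES ('Stmt2', 1, GETDATE())
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    INSERT INTO @audit VALUES (LEFT(ERROR_MESSAGE(), 200), 0, GETDATE())
    ROLLBACK TRANSACTION
END CATCH

-- the rollback does not touch @audit, so this write survives
INSERT INTO dbo.AuditTable (StmtName, Succeeded, LoggedAt)
SELECT StmtName, Succeeded, LoggedAt FROM @audit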
View 1 Replies
View Related
May 13, 2008
Hey all,
I'm trying to set up a backup schedule in SQL 2005 that will keep 5 .bak files within the folder I specify. I can only get it to keep one backup, even though I specify that it is to keep 5 days' worth of backups. Each backup happens at 12:00 am. I think the problem may be that I don't know how to get the maintenance plan to name the file mmddyyyy.bak, or whatever it may need, to determine that the file is 5 days old. Currently I am specifying a name of gmsqlbackup.bak. Any ideas that may help me out?
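A sketch of the underlying idea if scripting it by hand instead of using a fixed file name (the path and database name are illustrative): stamp the date into the file name so each day's backup is a distinct file that a cleanup task can age out.

DECLARE @file nvarchar(260)
SET @file = N'D:\Backups\gmsqlbackup_' + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak'
BACKUP DATABASE MyDatabase TO DISK = @file WITH INIT

In a 2005 maintenance plan the same effect comes from pointing the Back Up Database task at a folder (it appends a timestamp to each file itself) and adding a Maintenance Cleanup task that deletes .bak files older than 5 days.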
View 2 Replies
View Related
Jun 18, 2008
I have a report that pulls a customer balance.
In Crystal there was a way to have the page not show up if it met certain criteria (say, if the balance was 0 or negative).
I'd rather not filter them in SQL because it takes a few calculations to figure out what their balance is, and I already have SSRS doing that calculation.
So is there a way to have a report page not print based on certain criteria?
View 4 Replies
View Related
Feb 4, 2008
I'm importing tables from another SQL Server.
Now the tables are coming in as
esther.tablename (my username on the other server)
instead of
dbo.tablename
How can I change this?
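A sketch of the usual fix, run once per imported table (alternatively, the owner/login mapping can be sorted out before the import so objects are created as dbo in the first place):

EXEC sp_changeobjectowner 'esther.tablename', 'dbo'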
View 9 Replies
View Related
Jul 23, 2005
This may not be a MSSQL-specific question, but I wanted to ask it here first, in case there's a MSSQL and/or SourceSafe solution that will help. Our dev team is having some difficulty with keeping the nightly builds in sync with the stored proc mods. I'm wondering if there are some good case studies on how to avoid this "drift". Something like genning a new DB from checked-in SPs, etc. alongside each regular build, then always having a paired enterprise app/database duo that is tagged and added to a history. FWIW, we have a 3-tier .NET/C# app, and ADO.NET is throwing exceptions every other day. If the suggestion is to whip the DB guys, that works for me as well. ;-) Nah, there's much love there. Thanks in advance, ~swooz
View 2 Replies
View Related
Jul 5, 2006
I'm actually taking Microsoft's 2779 and just finished a lab where we kept track of our changes to the database. However, I'm not happy with the scripts interface because it does not tell me the chronological order of my changes to the database. Could someone share with me their technique for keeping track of database changes? I'm actually thinking a set of tables would be best, because sometimes you want to know what database object you made a change to, and other times you want to know when you did something...
View 2 Replies
View Related
Jul 20, 2005
Hi guys, I'm trying to compose a query that will select all columns from a table, but without any duplicates.
E.g. the table name is 'tblShipment', and the columns are:
fldUnique(pk) | fldShipNo | fldDate | fldValue
001 | 123 | 02/02/02 | 2000
002 | 222 | 01/01/01 | 3000
003 | 123 | 03/03/03 | 4000
004 | 444 | 04/04/04 | 5000
I want to be able to select all columns (4 columns) without duplicates, which should give me a recordset:
fldUnique(pk) | fldShipNo | fldDate | fldValue
001 | 123 | 02/02/02 | 2000
002 | 222 | 01/01/01 | 3000
004 | 444 | 04/04/04 | 5000
** 003 is not selected because its fldShipNo is identical to that of 001.
How should I go about querying that? I've tried DISTINCT, but I'm unable to use DISTINCT to select all columns. It is important that I have all columns selected, especially the primary key (pk), because I need it for reference. I'm really having trouble trying to achieve this, so I hope someone can help me out.
Thanks,
Shawn
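A sketch that works on SQL Server 2000 as well as later versions, using the table and column names above: keep the row with the lowest primary key for each fldShipNo and join back to pick up all the columns.

SELECT t.fldUnique, t.fldShipNo, t.fldDate, t.fldValue
FROM tblShipment t
JOIN (SELECT fldShipNo, MIN(fldUnique) AS minPk
      FROM tblShipment
      GROUP BY fldShipNo) firsts
  ON firsts.minPk = t.fldUnique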
View 3 Replies
View Related
Feb 21, 2008
Hello,
I currently have a report that has one table with rows that "can grow". When the report is generated, everything works accordingly, but if a row is too long it does not get split; instead, the whole row gets moved over to the next page. When this happens there ends up being a big white space from the last row to the end of the page, and then the next row continues on the next page. What I want is for the row to be split if there is too much data, instead of trying to keep the row in one piece. Is there an option to do this? I already have Keep Together unchecked.
Thanks.
View 12 Replies
View Related
Dec 10, 2007
Is it possible to keep a cached Lookup in memory when executing multiple Data Flows? Executing DFTs in parallel will cache and use the same LOOKUP statement. But what if I'm executing the DFTs sequentially? Can I keep the LOOKUP from the first DFT in memory for the second DFT? For example, in my case, I'm caching a lookup against the Customer dimension for invoices. The second DFT then processes credits and again does a lookup against the Customer dimension. I want to use the cached Customer records from the first DFT.
View 1 Replies
View Related
Apr 11, 2007
I was wondering if anybody knows how to solve this problem. Here's the setup.
There is an ASP.NET application running on a local web server at the customer's location, it currently uses a MSDE backend database. There is a copy of the application on the customer's webhosting company so it can be accessed from outside the customer's location it is running on a full version of SQL Server 2000. We have this setup because the customer's ISP is not very reliable and the customer needs to be able to use the application even when their web connection is down. It is also used from outside their location by sales people and management and remote offices. The problem is we want to keep both databases synchronized together. We had been using Merge Replication which was working fine until the local ID jumped because it had run out of allocated identities. This causes a problem for their accounting because now there is a gap in the document's numbers.
Is there a way to have the identity field (or a generated document number) to remain continuous and unique across both databases? This needs to also work if one of the databases were to go down for a time or lose connectivity between the two servers. I'm looking for any option. We also have the option of upgrading the application to SQL Server 2005 if needed. Any ideas are appreciated.
Thanks,
Seth
View 6 Replies
View Related
Sep 20, 2006
I have asked this question before and got some great answers; I just wanted to ask it again. I can detect dups easily. Is there a way to get a total count of all dups, so that I can delete a certain number at a time and then check the total again to verify that that specific number of dups is gone? I'm still using 2000. If I delete dups, won't it delete the one row I want to keep? And I do have a unique identity column.
thx,
Kat
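A sketch, assuming an identity column ID and duplicates defined by ColA/ColB (all hypothetical names; substitute the real columns): count the extras first, then delete everything except the lowest ID in each duplicate group, which keeps exactly one row per group.

SELECT ColA, ColB, COUNT(*) - 1 AS ExtraCopies
FROM dbo.MyTable
GROUP BY ColA, ColB
HAVING COUNT(*) > 1

DELETE t
FROM dbo.MyTable t
WHERE t.ID NOT IN (SELECT MIN(ID) FROM dbo.MyTable GROUP BY ColA, ColB)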
View 3 Replies
View Related
Jul 5, 2007
Hi, I noticed that some of my stored procedures are not working well on the production server because it seems it does not have the same DateTime format (amongst other things).
Example of the difference between DateTime formats:
My station (SQL Express 2005): 26/06/2007 3:17:20 PM
Production server (SQL 2005 Enterprise): 2007-06-26 15:17:20
How can I make the format of the database unchangeable so it keeps the same format on the production server? I use SQL Express 2005 for coding and SQL 2005 Enterprise for production. Thanks
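For what it's worth, a datetime value is stored as a binary number with no format at all; the difference shown is just each machine's regional display setting. A sketch of pinning the format at output time instead of relying on the server's defaults:

SELECT CONVERT(varchar(19), GETDATE(), 120)  -- always yyyy-mm-dd hh:mi:ss, regardless of server settings
SELECT CONVERT(varchar(10), GETDATE(), 103)  -- always dd/mm/yyyy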
View 5 Replies
View Related
Apr 17, 2008
Hi All,
I'd like to throw this idea 'out there' to see if I'm missing something I'll later regret.
I'm looking to resolve a scalability issue within our point-of-sale program. Currently the PK on transactional tables (sales and orders) is created by the application layer using a 'MAX(PKCol) + 1' mechanism. Obviously this requires that all users of the system, whether they're local or remote, have current data at any time they wish to insert. It's this limitation I'd like to remove. Most sites are using MS SQL Server 2000. No sites use anything specific to a later version.
By having a PK that can be generated independently of a 'master' database we can overcome this issue. The PK values will need to be unique within a 'group' of shops and able to be generated by a program operating at any level. From 'head office' which manages a number of shops, to the server at a given shop and even the register / till itself should be able to create ID's while disconnected from the server (using a local database).
It seems there's three main ways to accomplish this:
- Identities,
- MachineID, CurrentPK composite.
- GUID's
Identities: I've ruled out identities as I believe the administration overhead of dealing with them makes them impractical (there may be several hundred registers and therefore as many ranges to be set up within a group).
MachineID, CurrentPK composite: The MachineID references a Machine table which has an entry for each ethernet MAC address which connects to the database. The reason I chose to store the MAC in another table rather than simply using it as a column is that I'm fetching it from sysprocesses.net_address (nchar(12)) and believe it's computationally cheaper to use an int than a text column. This mechanism means that we can still expose the PK to the user in some cases (eg: InvoiceNumber printed on a receipt). When the local database is not up to date (usually due to network problems) there will be cases where the CurrentPK will be duplicated, but the key stays unique since it's coupled with the new MachineID. The big drawback to this method is that all current code will need to be revised to deal with the composite keys (this will be a significant amount of development).
GUIDs: Ugly to look at and time-consuming to type. They're not something you'd expose to a user unmodified, so realistically this means altering existing code to use a new 'user friendly' number where the PK is currently exposed. The use of GUIDs rules out clustered indexes on the tables where they're the PK, lest most inserts cause a page split. The splits would also necessitate more frequent index defrags/rebuilds. Using a non-clustered index incurs a penalty vs a non-fragmented clustered one (doesn't it?), so while this avoids page splits it comes at a cost.
After all that I think the best solution is to use GUIDs with a non-clustered index for each of the PK's. While it might not be the fastest of the options (slower reads/joins Vs composite PK) it will be significantly faster to develop while maintaining acceptable performance.
Thoughts?
View 14 Replies
View Related