Hi to all of you. I'm working on a project to save car information. When the user adding the new record enters all the data, it will need to be printed with a particular number, which needs to be unique (let's take a passport number as an example). This generated number will take some info from the fields being filled in, for example:
As part of the credit card process I have to supply a unique transaction id.
I thought the best way to do this would be to have a file which just holds a number which I grab and increment each time someone registers. Then, if everything is all right, I will write the user's details along with the transaction id to the User file.
My question is, if two people try to register and both try to grab the next number from the file at the same time, what will happen? Will one get a record lock message, or will it wait until the other one has finished, or could two people end up with the same number under these circumstances?
I am of course open to suggestions on the best way of approaching this, but please bear in mind that I am only using Web Matrix.
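One possibility, rather than holding the counter in a file, is to let SQL Server hand the number out: an IDENTITY column on the registrations table means two simultaneous registrations can never receive the same value. A rough sketch, with table and column names made up for the example:

create table Registrations
(
    TransactionId int identity(1,1) primary key,
    UserName      varchar(100) not null,
    RegisteredAt  datetime not null default getdate()
)

-- insert the registration, then read back the number SQL Server assigned to it
insert into Registrations (UserName) values ('example user')
select scope_identity() as TransactionId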
Hello. How do I get a unique, sequential number during execution of a stored procedure? I can create a table with an autoincrement column, add a record, get IDENT_CURRENT and delete the record each time I need the number. However, that's not elegant, I guess. Best regards, pluton
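An alternative sketch, assuming SQL Server 2005 or later: keep a one-row counter table and bump it with a single atomic UPDATE, capturing the new value through an OUTPUT clause, so nothing ever has to be inserted and deleted. Object names here are placeholders:

create table dbo.Counter (NextNo int not null)
insert into dbo.Counter (NextNo) values (0)
go

declare @no table (NextNo int)

-- the increment and the read happen in one statement, so it is safe under concurrency
update dbo.Counter
set NextNo = NextNo + 1
output inserted.NextNo into @no

select NextNo from @no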
Each change to a person's attributes results in a new row formed with the same PersonId as in the row with old attributes and the Date these new attributes are valid (DateFrom). So as shown above the Primary Key is a combination of the PersonId and DateFrom as a change to a person's attributes should never happen at the same time twice.
My problem is, when I want to create a new person, how do I get a new unique id? Ideally I want a new incremented id, so that all people's ids are in sequential order.
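One sketch, assuming the history table is called Person: serialise the "create new person" step with an application lock so two sessions cannot both compute MAX + 1 at the same moment. The lock is owned by the transaction, so it is released automatically at COMMIT:

begin transaction

exec sp_getapplock @Resource = 'NewPersonId', @LockMode = 'Exclusive', @LockOwner = 'Transaction'

declare @newId int
select @newId = isnull(max(PersonId), 0) + 1 from Person

insert into Person (PersonId, DateFrom /* , other attributes */)
values (@newId, getdate() /* , ... */)

commit transaction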
I need to generate a random 10 digit alphanumeric string that is also unique within a table. My application will be calling a stored procedure to insert this number into the table. This number will be associated with an id from another table. Is it better to generate the random number within SQL (and perform the lookup at the same time), then just pass the number back to the calling application?
If the calling application generates the number, it will also need to make a call to check if it's unique. So I'm thinking it would be best to simply have SQL generate this random number, check the number against the table and then insert the new record.
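A sketch of the stored-procedure approach: build a 10-character string from a character pool using NEWID() for per-row randomness, and retry until it does not collide with an existing row. The table and column names (dbo.Codes, Code) are placeholders; a unique index on the column is still what ultimately guarantees no duplicate slips in under concurrency:

declare @pool char(36), @code varchar(10), @i int
set @pool = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'

while 1 = 1
begin
    set @code = ''
    set @i = 0
    while @i < 10
    begin
        -- pick one character of the pool at random
        set @code = @code + substring(@pool, abs(checksum(newid()) % 36) + 1, 1)
        set @i = @i + 1
    end

    -- keep this value only if no existing row already uses it
    if not exists (select * from dbo.Codes where Code = @code)
        break
end

insert into dbo.Codes (Code /* , FkId, ... */) values (@code /* , ... */)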
I want to perform a query so that SQL looks for RandomNumber values and sets a unique random number where the RandomNumber value is NULL or 0. One of the MSDN members shared the query below as a solution:
select id, item,
       RandomNumber = case
                          when RandomNumber = 0 then (select floor(rand() * 100000000 - 1))
                          when RandomNumber is null then (select floor(rand() * 100000000 - 1))
                          else RandomNumber
                      end
from tblProducts
So, can you all confirm for me that performing this query ensures that if a value is assigned to one of the items in the RandomNumber column, that value will not be assigned to any other item in the RandomNumber column?
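One thing worth checking before relying on that query: RAND() is normally evaluated only once per query, so every row that hits the CASE branch can end up with the same value, and nothing in the statement checks for collisions with numbers already assigned. A hedged alternative sketch that at least varies per row (a unique constraint on RandomNumber is still what actually enforces "no two items share a value"):

update tblProducts
set RandomNumber = abs(checksum(newid()) % 100000000)
where RandomNumber is null or RandomNumber = 0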
My team at work has spent the past week troubleshooting performance issues experienced by users of our ASP.NET 2.x web application. We've got a probe running on one of the web servers that has identified a frequently occurring error that no one has seen before and I can't find anywhere online: MSSQL error "system.data.oledb.oledbcommand.executenonquery (Maximum number of unique SQL exceeded)". Has anyone here ever seen this error before? The web server, application, SQL servers and databases all seem to be configured properly, but users are experiencing latency and this frequently occurring error is a mystery to us.
What I need to do is re-populate a unique number into multiple fields.
Let me explain. An employee can appear in the first table only once but can be in the second table multiple times with multiple employee numbers. There is a field called TFN that is unique, and we are using it to create a unique id called KRid, so what I have done is created 2 tables, namely TEST_TBL and TEST2_TBL. In TEST_TBL I am populating KRid with a unique number produced from the TFN field only once, i.e. 12345 being the resulting unique id number. If an employee has 2 employee numbers, i.e. empno 1 and empno 1000, only empno 1 will have the unique KRid created but nothing for empno 1000, because the record already exists. What has me stumped is that the TFN for empno 1 and the TFN for empno 1000 are the same. How do I get the KRid (12345 from empno 1) to populate empno 1000 in TEST2_TBL? The second table has all the records in it, so I can group the second table by TFN, but how do I populate employee 1000 in the second table with the KRid 12345?
Please help! Below is how the tables are set up, along with an example of the result.
TABLE 1
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[TEST_TBL]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[TEST_TBL]
GO
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[TEST2_TBL]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[TEST2_TBL]
GO
SELECT NPE000.EmpNumber, NPET00.RecordStatus, NPE000.KR_ID, NPE000.Surname, NPE000.FirstName,
       NPE000.SecondName, NPE000.Class, NPE000.DateEmployed, NPE000.DateOfBirth, NPE000.HoursPerDay,
       NPE000.HoursPerWeek, NPE000.PassportNo, NPE000.AwardCode, NPE000.EmailPayslipTo, NPE000.Location,
       NPE000.Grade, NPE000.DateTerminated, NPE000.EmploymentType, NPE000.DistCode, NPE000.EmpStatus,
       NPET00.TaxRefNo
FROM NPE000 NPE000, NPET00 NPET00
WHERE NPET00.RecordStatus = 0
  AND NPET00.TaxRefNo <> ' 111111111'
  AND NPET00.TaxRefNo <> ' 000000000'
  AND LENGTH(NPET00.TaxRefNo) >= 9
  AND LENGTH(NPE000.KR_ID) >= 0
  AND NPE000.EmpNumber = NPET00.EmpNumber
Query goes as follows for table 2:
SELECT NPE000.EmpNumber, NPE000.FirstName, NPE000.Surname, NPE000.Class, NPE000.Location,
       NPE000.EmploymentType, NPE000.EmpStatus, NPET00.TaxRefNo, NPE000.Paypoint, NPE000.KR_ID
FROM NPE000, NPET00
WHERE Recordstatus = 0
  AND (EmploymentType = 1 AND EmpStatus = 1)
  AND NPE000.EmpNumber = NPET00.EmpNumber
From this you can see that in table 1 it will only create 1 KR_ID for one employee number, but in table 2 I am bringing through all employee records. In table 2 I can group by NPET00.TaxRefNo, which will bring all the NPET00.TaxRefNo values together. From that I would like to populate the other employee numbers with the unique KR_ID.
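If both tables carry the TFN (TaxRefNo) column, one sketch is a simple UPDATE...FROM that copies the KR_ID already generated for the first employee number onto every other row with the same TFN. Table and column names are taken from the post above and may need adjusting:

update t2
set t2.KR_ID = t1.KR_ID
from TEST2_TBL t2
     inner join TEST_TBL t1 on t1.TaxRefNo = t2.TaxRefNo
where t2.KR_ID is null or t2.KR_ID = ''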
Can someone please show me a way to get a unique sequence in the scenario below:
PC1 & PC2 using their own local client progam to access to Database Server at SERVER1. In the SERVER1, there is a table SEQUENCE in a database DATABASE1. And the table's structure of SEQUENCE are SeqType & SeqNo. Here is the sample data:
SeqType         SeqNo
Invoice           100
DeliveryOrder     200
Now, how do I prevent PC1 and PC2 from getting the same Invoice No. if they request it at the same time? Is it possible to lock the Invoice record when I perform the SELECT statement, then update the Invoice SeqNo to 101, and finally release the lock on the Invoice row?
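For what it's worth, roughly that pattern does work: take an update lock on the Invoice row when you read it, inside a transaction, so the second PC blocks until the first one commits. A sketch against the SEQUENCE table described above:

begin transaction

declare @next int

-- UPDLOCK makes a second reader wait here until this transaction commits
select @next = SeqNo + 1
from SEQUENCE with (updlock, rowlock)
where SeqType = 'Invoice'

update SEQUENCE
set SeqNo = @next
where SeqType = 'Invoice'

commit transaction

select @next as NewInvoiceNo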
I have a situation where for a given customer, their invoices need to be sequentially numbered, without gaps.
Customer A Invoice 1,2,3,4 ... A-1 A-2 A-3 A-4
Customer B: B-1 B-2
etc.
The issue is 2 people creating an invoice for the same customer at the same time. Currently I don't assign the invoice number until the user hits 'Save'. At that time I query for max(invoiceno) against the customer key and simply add 1. It's the last operation prior to saving against the backend.
If 2 users hit Save at the exact same time, I'm getting the same (duplicated) invoice number.
What suggestions/techniques do you have to resolve this? Would "locking" of the customer record and storing the last highest invoice there play a part?
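One sketch that keeps the numbering gapless per customer: do the MAX + 1 and the insert in the same transaction, reading with UPDLOCK and HOLDLOCK so a second session saving an invoice for the same customer has to wait. Table and column names are illustrative:

begin transaction

declare @next int

-- the hints hold a range lock on this customer's invoices until COMMIT
select @next = isnull(max(InvoiceNo), 0) + 1
from Invoice with (updlock, holdlock)
where CustomerKey = @CustomerKey

insert into Invoice (CustomerKey, InvoiceNo /* , ... */)
values (@CustomerKey, @next /* , ... */)

commit transaction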
Below is my sample data. I can't figure out how to select unique phone-number contacts for the same ranked values from the set.
Basically the table is a mix of contactIDs. Some of them have duplicate phone numbers and through a separate mechanism we have ranked them.
It's easy enough then to pull out the max(ranked) CLI_Numbers and their counterpart contactID(s). But I am also getting 2 or more records where the rank happens to be the same. I don't want that; any one of the contactIDs will do for me.
The table has also same cliNumbers with different rank values, which are then correctly being picked up in the query below.
Note: ContactId is a unique value for each person in the table. RecordID is simply RowID.
(I have attempted to put together some sample data suited for this forum; I'm not sure how it will come out in the browser.)
if object_id('tempdb..#MyData') is not null
    drop table #MyData

create table #MyData
(
    RecordID int,
    contactID int,
    forename varchar(25),
    surname varchar(25),
[Code] ....
This is my query attempt
With RankedmobileDuplicateSet as
(
    select cliNumber, max(ranked_value) as ranked_max_value
    from #temp_UK_mobiledata
    group by cliNumber
)
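If you are on SQL Server 2005 or later, a ROW_NUMBER() sketch avoids the tie problem altogether: partition by cliNumber, order by the rank descending, and break ties arbitrarily on contactID so exactly one row per number survives. Column names follow the attempt above and may need adjusting:

with RankedSet as
(
    select cliNumber, contactID, ranked_value,
           row_number() over (partition by cliNumber
                              order by ranked_value desc, contactID) as rn
    from #temp_UK_mobiledata
)
select cliNumber, contactID, ranked_value
from RankedSet
where rn = 1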
I have a task to perform a new SQL Server 2005 installation at a client. They have a system with 10 external SCSI drives, each of 72GB. They only have one database of 80GB in use at this system.
1) Userdata: I think that I will put 5 disks at SCSI channel 1, with RAID 3 or 6, with formatted space of 140GB.
2) Log: I think that I will put 3 disks at SCSI channel 2, with RAID 5, with formatted space of 70GB.
3) TempDB: I think that I will put 2 disks at SCSI channel 3, with RAID 0, with formatted space of 140GB.
System already has two built-in drives (RAID 1) for operating system, where I think I will put system databases.
I am working with a client who has everything (data files, logs, tempdb and user/system databases) on a single RAID set. In order to make some speed improvement, we have decided to move the log files to a second RAID controller card.
To accomplish this, we are thinking about taking these steps:
1) Detach database XYZ.
2) Stop database service.
3) Create a new partition on the new controller card.
4) Copy old log files from I: to the new K: drive.
5) Remove drive letter I: from the system.
6) Change drive letter K: to I:
7) Attach database
My question is, does SQL Server recognize the log files automatically, since they are placed at the same logical position (the I: drive)?
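One way to take the drive-letter question out of the equation is to spell the file paths out explicitly when re-attaching, rather than relying on whatever paths were recorded at detach time. A sketch (database name and paths are examples only):

exec sp_detach_db 'XYZ'

-- ...copy XYZ_log.ldf to the new partition and swap the drive letters here...

exec sp_attach_db 'XYZ',
    'D:\Data\XYZ.mdf',
    'I:\Logs\XYZ_log.ldf'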
If an issue is resolved, please note that the problem has been resolved. Because there are "many ways to skin a cat", it would be helpful to anyone else with a similar problem (or someone trying to learn) to know what the solution was.
Hello all, I have 2 primary key fields, the SSN and refnum... if the data in the file is duplicated it will not import to my table, right, even though I am using DTS to do my import? Or do I need to add an extra validator in there?
We have a project at work that requires us to run an MSDE database from CD. My supervisor and I disagree on whether this can be done or not. We currently have an application written in ASP that uses an Access DB. We copy the *.mdb file and all the ASP files to CD and use the app to query the DB. Now we have a client who wants the same capability, only using MSDE. They will not allow us to install the full version of SQL on their machines. No one in the dept wants to re-write this app, so we're trying to figure out if we can make it work with MSDE. So, my first question is: does anyone think it is possible to just copy the *.mdf file to CD (mind you, WITHOUT the *.ldf file... you can't write to a CD)? And my second question is: can a SQL DB run without both files (mdf, ldf) in any situation? (I believe this is impossible, but would like to hear it from someone with more experience.) Any comments are greatly appreciated. Thanks
I am about to write my registration confirmation tables. Right now I have my registration form and table "Users" that receives input from user registration.
I want to do email confirmation of a registered account. I was thinking of doing this by creating a table "Verify" that is set to 0 if not verified and 1 if the user verifies his email address.
Then I started thinking about the original registration information. If a user "doesn't" verify, then there is all this dormant data in the "Users" database. So, maybe it is better to write all the registration data to the "Verify" table and then have SQL move that data to the "Users" table upon verification? This sounds a bit sloppy to me... is this how the email verification of data is done or, if not, can somebody suggest a best practice?
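The move itself can be a single small transaction keyed on the verification code, so a registration either ends up in Users or stays in Verify, never both. A sketch, with table and column names made up for the example:

begin transaction

insert into Users (UserName, Email, PasswordHash /* , ... */)
select UserName, Email, PasswordHash /* , ... */
from Verify
where VerifyCode = @VerifyCode

delete from Verify
where VerifyCode = @VerifyCode

commit transaction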
I need confirmation from you SQL Server experts out there. Please let me know if the following works. Thanks!
This stored procedure gets a value and increments by 1, but while it does this, I want to lock the table so no other processes can read the same value between the UPDATE and SELECT (of course, this may only happen in a fraction of a second, but I anticipate that we will have thousands of concurrent users). I need to manually increment this column because an identity column is not appropriate in this case.
BEGIN TRANSACTION
-- TABLOCKX holds an exclusive table lock until COMMIT, so no other process can read the counter in between
UPDATE forum WITH (TABLOCKX) SET forum_last_used_msg_id = forum_last_used_msg_id + 1 WHERE forum_id = @forum_id
SELECT @new_id = forum_last_used_msg_id FROM forum WHERE forum_id = @forum_id
COMMIT TRANSACTION
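As far as the question goes, TABLOCKX does hold an exclusive table lock until the transaction ends, so ordinary readers are blocked in between. An alternative sketch folds the UPDATE and the SELECT into one statement, which only needs a lock on the single forum row rather than the whole table:

-- increment and read back in one atomic statement
update forum with (rowlock)
set @new_id = forum_last_used_msg_id = forum_last_used_msg_id + 1
where forum_id = @forum_id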
How can we tell whether the SP compiled successfully or not if we are compiling it on the server as part of a T-SQL script, since it does not print a message the way Oracle does?
In Oracle, the system will let you know whether the procedure compiled successfully or not.
How can I check whether the "Native Client" is present on my client machine? I'm trying to eliminate all the reasons my client won't run a VC++ app which accesses a SQL Server.
I have set up my confirmation system to work as thus:
1) User registers.
2) Registration goes into a temp table with a code.
3) User gets an MD5 code in his inbox.
4) User clicks back.
5) The click-back page moves the registration data from the TEMP table to the USER table and sends the user to the sign-in page.
6) ...
Now, my question is: how do I handle this final step in account confirmation? Should the first sign-in act as a final confirmation of the user's account? This makes sense. Should the data in the USER table self-delete after a day if there has been no first sign-in? Should I have a column in the USER table to show whether the first login has happened? How should I do this?
I am sure I could mess around with this but it would be great to get feedback from somebody that has done this multiple times and has a sense of what the best practice is (based on large volume examples).
I am looking for some confirmation on a behavior of the SSIS Script Task. I have a custom script task that takes an input file, and archives it after it has been processed into the database.
When I run this package in the Visual Studio GUI, if the destination drive is full, it throws an exception telling me that there is not enough disk space. So, my questions are:
1) If this happens when the package is running through the command line, would this exception still be thrown? (I am thinking it will be)
2) Also, do I need to explicitly fail the script task in the event handler in order to ensure that this .NET exception being thrown will cause the component to fail? (I am fairly certain I do, since this is what I had to do inside the Visual Studio GUI, but does anyone know if this same behavior would occur when running from the command line?)
I have a table in which a non-primary-key column has a unique index on it. If I insert a record into this table with a duplicate value for the indexed column, what will be the error number of the error in the above scenario? Or how could I find this out?
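The quickest way to find out on your own build is to trap the failing insert in TRY...CATCH (SQL Server 2005 and later) and print ERROR_NUMBER(). From memory, a duplicate against a unique index raises error 2601, while a duplicate against a unique constraint raises 2627. Table and column names below are placeholders:

begin try
    insert into dbo.MyTable (IndexedCol) values (@DuplicateValue)
end try
begin catch
    -- shows the error number and message raised by the duplicate
    select error_number() as ErrNo, error_message() as ErrMsg
end catch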
I need help with how to archive a table into another table with a unique number for all rows at once, plus a date/time (not to the second, only day, hour and minute). When I insert into the other table I need to add 2 more fields (unique number, date_time).
This is table 1, which I select from:
ID   fname   new_date   val_holiday
----------------------------------------------------

This is table 2, which I insert into:
ID   fname   new_date   val_holiday   unique number   date_time
--------------------------------------------------------------------------------------------------------------------
For every archive of the table into the other table (insert), I need to get a unique number + date/time (not to the second, only day, hour and minute).
Next insert ......
ID   fname   new_date   val_holiday   unique number   date_time
--------------------------------------------------------------------------------------------------------------------

Next insert ......
ID   fname   new_date   val_holiday   unique number   date_time
--------------------------------------------------------------------------------------------------------------------
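A sketch of that insert: compute one batch number and one minute-precision timestamp up front, then stamp every archived row with both in the same statement. The batch number here is just MAX + 1 on the archive table, and the names (table1, table2, unique_number, date_time) are illustrative stand-ins for the real ones:

declare @batch int, @stamp smalldatetime

select @batch = isnull(max(unique_number), 0) + 1 from table2

-- style 120 is 'yyyy-mm-dd hh:mi:ss'; keeping the first 16 characters drops the seconds
set @stamp = convert(smalldatetime, convert(varchar(16), getdate(), 120))

insert into table2 (ID, fname, new_date, val_holiday, unique_number, date_time)
select ID, fname, new_date, val_holiday, @batch, @stamp
from table1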
I have discovered what looks like a bug in the optimiser. I've posted it at https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=288243 but I wonder if any of you with SQL 2005 RTM, 2005 SP1 or 2008 CTP could confirm when this was introduced and whether it is still an issue?
Code Snippet
-- Bug report
-- 2007/07/19
-- Alasdair Cunningham-Smith
-- alasdair at acs-solutions dot co dot uk
set nocount on
go
-- example date in British date format
set dateformat dmy
go
use tempdb
go
create table foo( bar varchar( 30 ) not null )
go
insert into foo( bar ) values ( 'fishy' )
insert into foo( bar ) values ( '19/07/2007' )
go
-- this works fine in all versions - only valid dates are passed to the convert function
select
convert( smalldatetime, bar, 103 ) as bardate
from
foo
where
bar like '__/__/____'
go
-- this works on SQL 2000, but fails on SQL 2005 SP2 (I've not tried other SPs of SQL 2005):
-- Msg 295, Level 16, State 3, Line 2
-- Conversion failed when converting character string to smalldatetime data type.
--
-- I believe the query is rewritten as if the derived table query contained
-- "and convert( smalldatetime, bar, 103 ) < getdate()"
-- which would expose the convert to the invalid data
select
*
from
(
select
convert( smalldatetime, bar, 103 ) as bardate
from
foo
where
bar like '__/__/____'
) as derived
where
bardate < getdate()
go
-- Workaround:
-- Use a case statement to protect the convert operator from the invalid data
select
*
from
(
select
case when bar like '__/__/____' then
convert( smalldatetime, bar, 103 )
else
null
end as bardate
from
foo
where
bar like '__/__/____'
) as derived
where
bardate < getdate()
go
drop table foo
go
The workaround I discovered is simple but ugly. I invite your comments...