How To Work With A Huge Number Of Records In A Table Using MSSQL Server 2000?
Dec 21, 2005
In one of our forthcoming projects, built with ASP.Net/C#/MSSQL Server, we have to deal with a Business table holding about 15 million records. We want to know which methodologies we should adopt, from both the front-end and back-end perspectives, so that the site gives optimised performance. Also, in place of a dedicated server, the hosting company provides MSDE (which comes with .NET). Will this create any problem for a project with such a huge table? Should we go for some advanced database technique, such as clustering, splitting tables, etc.?
These are the fields that the business table contains:
ID, CategoryID (a foreign key to a Category table; each business belongs to a category), BusinessName, SignupDate, Address1, Address2, PhoneNumber, HoursOfOperation, YearsInBusiness, LicenseNumber, DiscountCoupon, Website
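The replies aren't shown here, but here is a minimal sketch of the schema side of the question; only the column names come from the post, and every type, size and index choice below is an assumption:

Code Snippet
-- Hypothetical DDL: clustering on the identity keeps 15M-row inserts cheap,
-- and a nonclustered index on CategoryID serves the common
-- "all businesses in a category" lookup without a table scan.
CREATE TABLE Business (
    ID INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_Business PRIMARY KEY CLUSTERED,
    CategoryID INT NOT NULL,          -- FK to the Category table
    BusinessName NVARCHAR(100) NOT NULL,
    SignupDate DATETIME NOT NULL DEFAULT GETDATE(),
    Address1 NVARCHAR(100) NULL,
    Address2 NVARCHAR(100) NULL,
    PhoneNumber VARCHAR(20) NULL,
    HoursOfOperation NVARCHAR(100) NULL,
    YearsInBusiness SMALLINT NULL,
    LicenseNumber VARCHAR(30) NULL,
    DiscountCoupon NVARCHAR(100) NULL,
    Website NVARCHAR(200) NULL
)
GO
CREATE INDEX IX_Business_CategoryID ON Business (CategoryID)
GO

On the MSDE question: MSDE 2000 enforces a 2 GB per-database limit and a concurrency governor, so a 15-million-row table can easily outgrow it; that is worth verifying before committing to the host.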
View 3 Replies
Sep 28, 2006
Hi All,
I used a Data Flow task, and when trying to transfer data from an OLE DB source (~75 lakh, i.e. 7.5 million, records) to an OLE DB destination, SSIS fails midway with an error saying the transaction log got filled; try again after clearing it.
My question is: what is the most efficient way to transfer more than 50 lakh (5 million) records while ensuring the transfer doesn't fail in the middle?
Thanks in Advance,
Mithun.
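The usual way around a full transaction log on a big transfer is to commit in batches, so no single transaction has to fit in the log. In SSIS itself, the OLE DB Destination's "Rows per batch" and "Maximum insert commit size" fast-load settings do this; below is a plain T-SQL sketch of the same idea (table, column and key names are made up):

Code Snippet
-- Hypothetical batched copy: each pass moves at most 50,000 rows and
-- commits, so log space is reused between batches (assumes a unique
-- key ID on both sides so already-copied rows are skipped).
DECLARE @rows INT
SET @rows = 1
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (ID, Col1, Col2)
    SELECT TOP 50000 s.ID, s.Col1, s.Col2
    FROM dbo.SourceTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.ID = s.ID)
    SET @rows = @@ROWCOUNT
END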
View 5 Replies
Nov 20, 2006
Hi, I have an application, let's call it simply "A", which creates two huge tables in a Microsoft SQL database; let's call them "table1" and "table2". It saves a lot of data into these tables. After application "A" has finished, another application is executed which deletes these two tables. Then application "A" is started again and it creates the two tables again, but the amount of data becomes bigger. It can only proceed if the tables were deleted completely beforehand and the database is empty. I repeat this procedure very often, and every time the amount of data grows (table1 and table2 become bigger). A couple of times it works fine, but at some point the data becomes too big and application "A" fails, most likely because the data wasn't removed correctly/completely. This is my code for deleting the two tables; maybe there is something I have to change:

Code Snippet
try
{
    SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(
        "Server=mycomputer\\dbname;Integrated Security=SSPI;" + "Initial Catalog=testing");
    builder["Server"] = "(local)\\dbname";
    builder["Connect Timeout"] = 10;
    builder["Trusted_Connection"] = true;
    builder["Initial Catalog"] = ((ComponentConfiguration)this.componentConfig).Persistency.DatabaseName;

    SqlConnection sqlconnection = new SqlConnection();
    sqlconnection.ConnectionString = builder.ConnectionString;
    sqlconnection.Open();

    SqlCommand cmd1 = new SqlCommand("DROP TABLE table1"); // TODO: delete all tables
    SqlCommand cmd2 = new SqlCommand("DROP TABLE table2"); // TODO: delete all tables
    cmd1.Connection = sqlconnection;
    cmd2.Connection = sqlconnection;
    cmd1.ExecuteNonQuery();
    Thread.Sleep(7000);
    cmd2.ExecuteNonQuery();
    Thread.Sleep(7000);
    sqlconnection.Close();
    Thread.Sleep(3000);
}
catch { } // note: this swallows every exception, so a failed DROP goes unnoticed

Thanks for help! mulata
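If the suspicion is that the tables are not always gone before "A" restarts, a guarded drop plus an explicit check is more reliable than fixed Thread.Sleep calls; a T-SQL sketch (only the table names come from the post):

Code Snippet
-- Drop only what exists, then verify instead of sleeping.
IF OBJECT_ID('dbo.table1') IS NOT NULL
    DROP TABLE dbo.table1
IF OBJECT_ID('dbo.table2') IS NOT NULL
    DROP TABLE dbo.table2
-- Returns no rows once both tables are really gone.
SELECT name FROM sysobjects
WHERE name IN ('table1', 'table2') AND xtype = 'U'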
View 6 Replies
May 30, 2008
Hi Everyone,
We have a large test database with millions of records for more than one company site code. Sometimes we want to refresh the data of that database for one or more site codes.
In order to do that, I have to delete all records for the site codes we want to refresh on the test database first, then copy a new set of data over from the production database. Since we refresh data based on the site code, I have to use the DELETE command instead of TRUNCATE.
Since this is a huge database with thousands of tables and millions of records per table, I have performance issues with the DELETE command. So what would be the best way to delete a large number of records without writing any information to the database log file?
FYI: The Recovery model of this database is Simple
Regards,
Jdang
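With simple recovery there is no way to make a DELETE unlogged, but deleting in small batches keeps the log from growing, because each batch's log space is reused after it commits. A hedged sketch in SQL 2000 syntax (table and column names are placeholders; on 2005+ DELETE TOP (10000) replaces the ROWCOUNT idiom):

Code Snippet
-- Batched delete for one site code.
SET ROWCOUNT 10000
DECLARE @rows INT
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE FROM dbo.BigTable WHERE SiteCode = 'ABC'
    SET @rows = @@ROWCOUNT
END
SET ROWCOUNT 0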
View 9 Replies
Jun 20, 2007
I have a table with 35,000 records in it. I want to update a value in Column A for only the first 5000 records, leaving Column A as it is for the remaining 30,000. What command would I use to update Column A for just the first 5000 records?
Thanks,
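"First 5000" only means something once an ordering is fixed, so this sketch assumes an ID column defines it (valid on SQL 2000 and later):

Code Snippet
-- Update only the 5000 rows with the lowest IDs.
UPDATE dbo.MyTable
SET ColumnA = 'NewValue'
WHERE ID IN (SELECT TOP 5000 ID FROM dbo.MyTable ORDER BY ID)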
View 4 Replies
Feb 1, 2008
A friend reminded me of a problem we tried to solve a few years ago and were unsuccessful. Below is a copy of the email he sent me. We would very much appreciate any ideas from the community. Thanks!
"Let's start with a simple schema where you have 4 tables:"
View 3 Replies
Apr 24, 2014
I have table A (EmployeeNumber, Grouping, Stages)
and
Table B (Grouping, Stages)
Table A could look like the following, where multiple employees can have multiple types and multiple stages (the Grouping column appears as Type below).
EmployeeNumber, Type, Stages
100, 1, Stage1
100, 1, Stage2
100, 2, Stage1
100, 2, Stage2
200, 1, Stage1
200, 2, Stage2
Table B is a list of requirements that each employee must have. So every employee must have a type 1 and 2 and the associated stages listed below.
Type, Stage
1, Stage1
1, Stage2
2, Stage1
2, Stage2
2, Stage3
2, Stage4
So I know that each employee should have 2 Type 1's and 4 Type 2's. I hope that makes sense; I've disguised the data because ours is very proprietary.
I need to identify employees who do not have all their stages and list the stages they are missing. The final report should only have employees and the associated missing types and stages.
I do a count by employee to see how many types they have to identify the ones that don't have all the types and stages.
My count would look something like this:
EmployeeNumber Type Total
100, 1, 2
100, 2, 2
200, 1, 1
200, 2, 1
So I know that employee 100 should have 2 more Type 2's and employee 200 should have 1 more Type 1 and 3 more Type 2's, based on the required list.
The problem I'm having is taking that required list and joining to my list of employees with missing data and pulling from it the types and stages that are missing by employee. I thought I could get a list of the employees that are missing information and right join it to the required list where the missing records would be nulls. But, that doesn't work because some employees do have the required information and so I'm not getting any nulls returned.
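One way out of the right-join dead end is to build the full requirement set per employee first (distinct employees cross-joined with the requirements) and only then left-join the actual data, keeping the null rows. A sketch with names adapted from the post:

Code Snippet
-- Every employee paired with every required (Type, Stage)...
SELECT e.EmployeeNumber, r.Type, r.Stage
FROM (SELECT DISTINCT EmployeeNumber FROM TableA) e
CROSS JOIN TableB r
-- ...minus the combinations the employee actually has:
LEFT JOIN TableA a
    ON a.EmployeeNumber = e.EmployeeNumber
    AND a.Type = r.Type
    AND a.Stages = r.Stage
WHERE a.EmployeeNumber IS NULL
ORDER BY e.EmployeeNumber, r.Type, r.Stage

Because every employee is paired with the complete requirement list before the comparison, employees who satisfy everything simply produce no rows, which is what the plain right join couldn't do.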
View 9 Replies
Dec 26, 2006
Hi,
Has anyone worked on a system that needs around 1 million writes per day, almost as many reads, and deletes once every 1-2 hours? File sizes range from 2KB to 1MB, though 85% of the files are in the 1-8KB range.
This is all done in a single table with a GUID key.
If yes could you tell me what problems did you encounter, any suggestions etc.
Thank you in advance
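Not firsthand experience, but one well-documented pitfall of this design: random GUIDs as the clustered key fragment the table badly at a million inserts a day. A sketch of the usual mitigation (all names assumed; NEWSEQUENTIALID() requires SQL 2005):

Code Snippet
-- Sequential GUIDs arrive in index order, avoiding the constant
-- page splits that NEWID()-style keys cause on the clustered index.
CREATE TABLE dbo.FileStore (
    FileID UNIQUEIDENTIFIER NOT NULL
        DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_FileStore PRIMARY KEY CLUSTERED,
    Payload IMAGE NULL,      -- the 2KB-1MB blobs from the post
    CreatedAt DATETIME NOT NULL DEFAULT GETDATE()
)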
View 14 Replies
Nov 2, 2006
Hello
I'm searching for a solution to set all matrix rows or cells to the same height.
It should look like this example:
This is a simple matrix
test a
text b
text c
text d
text e
text f
text g
This is a matrix with all the same row-height.
test a
text b
.
text c
.
.
text d
text e
text f
text g
.
.
Thanks a lot
View 3 Replies
Oct 22, 2015
I am copying data from a database to an Excel file through SSIS. The database is MS SQL 2005 and BIDS is also 2005. However, the job doing this task fails every time. As per investigation, the result of the query is more than 100,000 rows, and Excel 97-2003 (.xls) has a limit of 65,536 rows per worksheet. Is there a way of raising the limit, or a better approach, so that everything gets copied to the Excel file successfully?
The PrimeOutput method on component "Source - Query" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. End Error Error: 2015-10-22 04:34:58.05 Code: 0xC0047021
Source: Data Flow Task
Description: SSIS Error Code DTS_E_THREADFAILED.
Thread "SourceThread0" has exited with error code 0xC0047038. There may be error messages posted before this with more information on why the thread has exited. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 4:30:00 AM Finished: 4:35:05 AM Elapsed: 304.844 seconds. The package execution failed. The step failed. "
View 1 Replies
Sep 6, 2006
Does enabling/disabling Data Execution Prevention have a performance impact on SQL 2000 or SQL 2005?
For best SQL performance, how should I configure:
Processor Scheduling: Programs or Background services?
Memory Usage: Programs or System Cache?
View 9 Replies
Feb 1, 2005
I am upgrading a .mdb to MSSQL. The .mdb is 17MB, but the resulting MSSQL database is 72MB. I tried using both the Access Upsizing Wizard and Enterprise Manager DTS. I have done this a number of times before, but never ran into this problem. Any ideas what could be going on, and how to fix it?
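The replies aren't visible here, but the gap is often preallocated data and log space rather than 72 MB of real data. A quick way to check, and to release unused space if so (database name is a placeholder):

Code Snippet
-- Reserved vs. actually used space for the current database.
EXEC sp_spaceused
GO
-- Release unused space back to the OS (use sparingly; routine
-- shrinking causes fragmentation).
DBCC SHRINKDATABASE (MyUpsizedDb)
GO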
View 2 Replies
Oct 14, 2005
I have one table with 300,000 records and 30 columns. For example, the columns are ID, COMPANY, PHONE, NOTES, ...
ID - nvarchar, length 9
COMPANY - nvarchar, length 30
NOTES - nvarchar, length 250

Select * from database
where NOTES like '%something%'

Is there a way to get results from this query in less than 1-2 seconds, and how?
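A LIKE with a leading wildcard can never use an ordinary index, so the usual fix is full-text search, which matches words and prefixes (not arbitrary substrings) from its own index. A hedged sketch in SQL 2000 syntax; the catalog and key-index names are assumptions:

Code Snippet
EXEC sp_fulltext_database 'enable'
EXEC sp_fulltext_catalog 'MyCatalog', 'create'
EXEC sp_fulltext_table 'dbo.MyTable', 'create', 'MyCatalog', 'PK_MyTable'
EXEC sp_fulltext_column 'dbo.MyTable', 'NOTES', 'add'
EXEC sp_fulltext_table 'dbo.MyTable', 'start_full'
GO
-- Word-based search that uses the full-text index:
SELECT * FROM dbo.MyTable WHERE CONTAINS(NOTES, '"something"')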
View 7 Replies
Apr 3, 2000
SQL 7 SP1 NT4 SP5
I have a TRANSACTION table with 150 million rows.
I have a USER table.
Each user has about 600 records in the TRANSACTION table.
The TRANSACTION table's clustered index is on USERID + RECID. The second index is on USERID + Fieldx + Fieldy.
The TRANSACTION table gets about 1.4 million inserts in a normal day and about 40,000 updates.
I want to go through the USER table and delete all users who have not visited me in a while.
I want to do this without substantially hindering performance in a production environment. I can perform this over a week period or two if needed.
The best way I could think of was to grab a batch of users with a cursor and loop through, deleting their corresponding TRANSACTION records.
Does anyone have any ideas on a better way? And what is going to happen to my indexes during this time?
Thanks !!!
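A sketch of the batched variant of that plan, which avoids one giant transaction and lets other work through between batches (the TRANSACTION table is renamed here because TRANSACTION is a reserved word, and the USERS.LastVisit column is assumed):

Code Snippet
SET ROWCOUNT 5000   -- at most 5000 rows per pass
DECLARE @rows INT
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE T
    FROM TRANS_TABLE T
    JOIN USERS U ON U.USERID = T.USERID
    WHERE U.LastVisit < DATEADD(MONTH, -12, GETDATE())
    SET @rows = @@ROWCOUNT
    WAITFOR DELAY '00:00:05'   -- breathing room between batches
END
SET ROWCOUNT 0

Because the clustered index leads on USERID, each user's rows sit together, so the per-user deletes stay relatively cheap; expect the secondary index to accumulate some fragmentation until the next rebuild.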
View 3 Replies
Apr 7, 2006
How can I copy a table from my sql server 2000 db to my sql server 2005 express edition?
I have a project in VS.NET 2005 and I have a db in App_Data folder. However, when I look into that folder, there is nothing visible. I now need to copy a table from my existing sql server 2000 to my db located in my project's App_Data folder.
Any help would be appreciated..
Regards,
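One hedged way to do it without GUI tools is an ad-hoc distributed query run from the 2005 Express side (requires the 'Ad Hoc Distributed Queries' option; server, database and table names below are placeholders):

Code Snippet
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE
GO
-- Pulls the table across and creates a local copy (data and columns
-- only; indexes and constraints must be recreated separately).
SELECT *
INTO dbo.MyCopiedTable
FROM OPENDATASOURCE('SQLOLEDB',
    'Data Source=OldServer2000;Integrated Security=SSPI'
    ).MyDatabase.dbo.MyTable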
View 1 Replies
Jul 20, 2005
Hi, I have an Access application with tables linked via ODBC to MSSQL Server 2000. I'm having a weird problem, probably something I've done without being aware of it (kind of a newbie). The last 20 records (and growing) of a specific table are locked - they can't be changed ("another user is editing these records ..."). I know for a fact that no one is editing those records, and yet no user can edit these last records in the MDB - including the administrator - while still being able to add new records. The administrator is able to edit records in the ADP (MSSQL Server) where the tables are stored. Please help, the application is rendered inert. Thanks for reading, Oren.
View 3 Replies
Jul 20, 2005
Hi, guys. A very simple question for all of you: how can I get the number of bytes exchanged during a Merge replication between two Microsoft SQL 2000 servers? Thank you. Bye, Angelo.
View 1 Replies
Jan 7, 2007
Environment:
Server1 (Local): OS Windows 2000 Server, SQL Server 2000
Server2 (Remote): OS Windows 2003 Server, SQL Server 2000
(Both with the most recent service packs)

Using Enterprise Manager, we have set up the linked server (LINK_A) on the local Server1 to connect to Server2.

The SQL we need to run is the following:

INSERT INTO table1 (column1, column2)
SELECT A.column1, A.column2
FROM LINK_A.catalog_name.dbo.table2 AS A
WHERE A.column1 <> xxxx;

When we run this from Query Analyzer, it completes with no problems in a few seconds.

Our problem: when we add the DTS package as an ActiveX Script (VBScript) to the local package, it times out at "obj_Conn.Execute str_Sql":

Dim Sql, obj_Conn
Set obj_Conn = CreateObject("ADODB.Connection")
obj_Conn.Open XXXX
obj_Conn.BeginTrans
str_Sql = "INSERT INTO table1("
str_Sql = str_Sql & "column1"
str_Sql = str_Sql & ", column2"
str_Sql = str_Sql & ")"
str_Sql = str_Sql & " SELECT A.column1"
str_Sql = str_Sql & ", A.column2"
str_Sql = str_Sql & " FROM LINK_A.catalog_name.dbo.table2 AS A"
str_Sql = str_Sql & " WHERE A.column1 <> 0"
str_Sql = str_Sql & ";"
obj_Conn.Execute str_Sql

When we make a stored procedure and run the following SQL, it freezes:

INSERT INTO table1 (column1, column2)
SELECT A.column1, A.column2
FROM LINK_A.catalog_name.dbo.table2 AS A
WHERE A.column1 <> xxxx

We've also tried the following, with the same results:

INSERT INTO table1 (column1, column2)
SELECT A.column1, A.column2
FROM [LINK_A].[catalog_name].[dbo].[table2] AS A
WHERE A.column1 <> xxxx

The same thing happens when we try to run the SELECT by itself:

SELECT TOP 1 @test = A.column1
FROM LINK_A.catalog_name.dbo.table2 AS A
WHERE A.column1 <> xxxx
ORDER BY A.column1

What is going wrong here, and how do we need to change this so that it runs without timing out or freezing?
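Not a full diagnosis, but one change that often avoids linked-server stalls is pushing the whole remote query to the other side with OPENQUERY, so the filter runs on Server2 and only the final rows travel back. A sketch using the names from the post (the comparison operators above were eaten by the forum's HTML, so <> is a guess throughout):

Code Snippet
INSERT INTO table1 (column1, column2)
SELECT column1, column2
FROM OPENQUERY(LINK_A,
    'SELECT column1, column2
     FROM catalog_name.dbo.table2
     WHERE column1 <> 0')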
View 2 Replies
Jul 12, 2007
Hey Guys
I have met the same problem, too.
I create a table with 1,380,000 rows data,
the db real size about 114 MB.
The primary key size is nchar(6).
When I use RDA pull, I found that the primary key disappears on the PDA, so it takes a long time to get a query response.
But when I deleted some rows, down to 680,000 rows of data, the primary key could be pulled from SQL Server after the RDA pull. I didn't change any code, just deleted some rows.
Is this a SQL Mobile bug?
PS:
1. The Database and Temp Database limits are both 384MB.
2. If I use Query Analyzer to add the primary key, it works! So strange!
3. The pull process returns "S_OK".
4. After the pull process finishes, the db connection is still alive, so it does not look like a timeout problem.
5. Local connection string: "Data Source='%s\%s';SSCE:Database Password='%s';SSCE:Encrypt Database='true';SSCE:Max Database Size=384;SSCE:Temp File Max size=384;SSCE:Temp File Directory=%s"
View 1 Replies
May 19, 2006
My company would like to start keeping track of every time a table, stored procedure, or function is changed.
What is the best way to do this?
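On SQL 2005 this is exactly what DDL triggers do; a minimal sketch that logs every create/alter/drop of tables, procedures and functions in one database (log-table and trigger names are made up; SQL 2000 has no equivalent, so there you would fall back on Profiler traces):

Code Snippet
CREATE TABLE dbo.DdlLog (
    LogID INT IDENTITY(1,1) PRIMARY KEY,
    EventTime DATETIME NOT NULL DEFAULT GETDATE(),
    LoginName SYSNAME NOT NULL,
    EventData XML NOT NULL   -- full details: statement text, object, etc.
)
GO
CREATE TRIGGER trg_TrackDdl ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
    CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION
AS
INSERT INTO dbo.DdlLog (LoginName, EventData)
VALUES (SUSER_SNAME(), EVENTDATA())
GO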
View 4 Replies
May 23, 2008
let's say I have this one, no problem:
Code Snippet
CREATE PROCEDURE au_info
@var text
AS
DECLARE @hDoc int
EXEC sp_xml_preparedocument @hDoc OUTPUT,
@var
PRINT CAST(@var as varchar(8000))
-- Use OPENXML to provide rowset consisting of customer data.
--INSERT Customers
SELECT *
FROM OPENXML(@hDoc, N'/ROOT/Customers')
-- WITH Customers
GO
Code Snippet
EXEC au_info @var = '
<ROOT>
<Customers CustomerID="XYZAA" ContactName="Joe"
CompanyName="Company1">
<Orders CustomerID="XYZAA"
OrderDate="2000-08-25T00:00:00"/>
<Orders CustomerID="XYZAA"
OrderDate="2000-10-03T00:00:00"/>
</Customers>
<Customers CustomerID="XYZBB" ContactName="Steve"
CompanyName="Company2">No Orders yet!
</Customers>
</ROOT>'
but what if I only have the xml text in a temporary table:
Code Snippet
create table #t1 (col1 text)
insert into #t1
Values('<ROOT>
<Customers CustomerID="XYZAA" ContactName="Joe"
CompanyName="Company1">
<Orders CustomerID="XYZAA"
OrderDate="2000-08-25T00:00:00"/>
<Orders CustomerID="XYZAA"
OrderDate="2000-10-03T00:00:00"/>
</Customers>
<Customers CustomerID="XYZBB" ContactName="Steve"
CompanyName="Company2">No Orders yet!
</Customers>
</ROOT>')
if i do this:
Code Snippet
EXEC au_info @var = (Select col1 from #t1)
Server: Msg 170, Level 15, State 1, Line 1
Line 1: Incorrect syntax near '('.
Code Snippet
EXEC au_info (Select col1 from #t1)
Server: Msg 201, Level 16, State 3, Procedure au_info, Line 0
Procedure 'au_info' expects parameter '@var', which was not supplied.
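EXEC accepts only literals and variables, never a query, so the usual workaround is to fetch the value into a variable first. A local variable can't be declared as text in SQL 2000, so this sketch falls back on varchar(8000), which converts implicitly on the call (and silently truncates anything longer):

Code Snippet
DECLARE @x varchar(8000)
SELECT @x = col1 FROM #t1
EXEC au_info @var = @x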
View 9 Replies
Jul 20, 2005
Hi All, I've the following table with a PK defined on an IDENTITY column (INSERT_SEQ):

CREATE TABLE MYDATA (
MID NUMERIC(19,0) NOT NULL,
MYVALUE FLOAT NOT NULL,
TIMEKEY INTEGER NOT NULL,
TIMEKEY_DTTM DATETIME NULL,
IID NUMERIC(19,0) NOT NULL,
EID NUMERIC(19,0) NOT NULL,
INSERT_SEQ NUMERIC(19,0) IDENTITY(1,1) NOT NULL)
GO
ALTER TABLE MYDATA
ADD CONSTRAINT PK_MYDATA
PRIMARY KEY (INSERT_SEQ)
GO

The TIMEKEY_DTTM field is generated, from the value actually inserted into the TIMEKEY field, by the following trigger:

CREATE TRIGGER TIMEKEY1
ON MYDATA
FOR INSERT AS
BEGIN
DECLARE @M_TIMEKEY_DTTM DATETIME
SELECT @M_TIMEKEY_DTTM = DATEADD(SECOND, INS.TIMEKEY + EP.GMT_OFFSET * 0, '1970-01-01 00:00:00.000')
FROM INSERTED INS, LOCATIONINFO EP
WHERE INS.EID = EP.EID
UPDATE MYDATA
SET TIMEKEY_DTTM = @M_TIMEKEY_DTTM
FROM INSERTED INS, MYDATA MD
WHERE MD.INSERT_SEQ = INS.INSERT_SEQ
END
GO

There is also a composite, non-unique index defined on the tuple (MID, IID, TIMEKEY, EID):

CREATE INDEX IX_METDATA ON MYDATA (MID, IID, TIMEKEY, EID)
GO

As a consequence of an application design change, I would also change this index to be UNIQUE, but when I try to drop and create it I get an error, because the table stores some duplicated rows...

In order to successfully upgrade the index definition, I wrote some DML statements to look up and remove the duplicated rows, keeping only the first record inserted, i.e. the one with the lowest INSERT_SEQ:

-- This table stores the number of duplicated records eventually
-- discovered in the MYDATA table; the initial value for the
-- NUM_DUPLICATES field is 0 (no duplicated records)
DROP TABLE DUPLICATES
GO
CREATE TABLE DUPLICATES (
TABLENAME VARCHAR(17),
NUM_DUPLICATES NUMERIC(19,0))
GO
INSERT INTO DUPLICATES VALUES ('MYDATA', 0)
GO
INSERT INTO DUPLICATES VALUES ('CATEGORIESDATA', 0)
GO

-- ///////// CLEAN UP OF MYDATA TABLE
DROP TABLE TMP_MYDATA
GO
CREATE TABLE TMP_MYDATA (
MID NUMERIC(19,0) NOT NULL,
TIMEKEY INTEGER NOT NULL,
IID NUMERIC(19,0) NOT NULL,
EID NUMERIC(19,0) NOT NULL,
INSERT_SEQ NUMERIC(19,0))
GO

-- Insert into the TMP_MYDATA table all the duplicated records for
-- the tuple (MID, IID, TIMEKEY, EID) and NULL for the INSERT_SEQ field
INSERT INTO TMP_MYDATA (MID, IID, TIMEKEY, EID)
SELECT MID, IID, TIMEKEY, EID
FROM MYDATA
GROUP BY MID, IID, TIMEKEY, EID
HAVING COUNT(*) > 1
GO

-- Update the INSERT_SEQ field to the lowest value in the group
-- of duplicated records
UPDATE TMP_MYDATA
SET TMP_MYDATA.INSERT_SEQ = (
SELECT MIN(INSERT_SEQ)
FROM MYDATA
WHERE TMP_MYDATA.MID = MYDATA.MID AND
TMP_MYDATA.IID = MYDATA.IID AND
TMP_MYDATA.TIMEKEY = MYDATA.TIMEKEY AND
TMP_MYDATA.EID = MYDATA.EID)
GO

-- Update the value of NUM_DUPLICATES for the MYDATA table
UPDATE DUPLICATES
SET NUM_DUPLICATES = (SELECT COUNT(*) FROM TMP_MYDATA)
WHERE TABLENAME = 'MYDATA'
GO

-- Delete from the MYDATA table all the duplicated records,
-- keeping only the row with the lowest INSERT_SEQ.
-- The delete is performed only if there are duplicated records;
-- this is achieved using a "short circuit" AND on the number of
-- records stored in the NUM_DUPLICATES field of the DUPLICATES
-- table for the MYDATA table...
DELETE FROM MYDATA
WHERE (SELECT NUM_DUPLICATES FROM DUPLICATES WHERE TABLENAME = 'MYDATA') > 0 AND
EXISTS (SELECT 1
FROM TMP_MYDATA
WHERE MYDATA.MID = TMP_MYDATA.MID AND
MYDATA.IID = TMP_MYDATA.IID AND
MYDATA.TIMEKEY = TMP_MYDATA.TIMEKEY AND
MYDATA.EID = TMP_MYDATA.EID AND
MYDATA.INSERT_SEQ > TMP_MYDATA.INSERT_SEQ)
GO

This technique works fine on a normal table (1M recs) but is not very performant on huge tables (>10M records)!

Do you know a better way to achieve the task of removing all the duplicate records, preserving the lowest INSERT_SEQ between the duplicates and also preserving the sequence seed, so that a new record inserted at time t1 > t0 is enumerated with an INSERT_SEQ|t1 > max(INSERT_SEQ)|t0?

Thanks a lot for your help!
Patrizio
PS. Sorry for such a large post!
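Since the replies aren't shown: one alternative that usually scales better on SQL 2000 than the staging-table approach is a single delete joined to the per-group minimum (and it can still be batched with SET ROWCOUNT if the transaction grows too large). The identity seed is untouched, so future INSERT_SEQ values stay above the old maximum. A sketch against the tables above:

Code Snippet
-- Delete every row that is not the first (lowest INSERT_SEQ) of its group.
DELETE M
FROM MYDATA M
JOIN (SELECT MID, IID, TIMEKEY, EID, MIN(INSERT_SEQ) AS KEEP_SEQ
      FROM MYDATA
      GROUP BY MID, IID, TIMEKEY, EID
      HAVING COUNT(*) > 1) D
  ON M.MID = D.MID AND M.IID = D.IID
 AND M.TIMEKEY = D.TIMEKEY AND M.EID = D.EID
 AND M.INSERT_SEQ > D.KEEP_SEQ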
View 1 Replies
Jul 6, 2015
I have a CTE query against a table with 32K rows that runs fine in 2008R2. I am running it in 2014 Std Ed. against the same data and it runs very slowly. Looking at the execution plan I think I see what's contributing to the slowness.
Note that the "actual number of rows" is some 351M...how is this possible?
the query:
declare @amts table (claim int,allowed decimal(12,2),copay decimal(12,2),deductible decimal(12,2),coins decimal(12,2));
;with unpaid (claimID) as (select claimID from claim where amt+copay + disct+mm + ded=0)
insert @amts
select lineID, sum(rc), sum(copay), sum(deduct),
case when sum(mm)>0 and (sum(mm)<sum(mmamt)) then sum(mm) else 0 end
from claimln
where status is null
and lineID not in (select claimID from unpaid)
group by lineID
it's like there's some massively recursive process going on?
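It isn't recursion: 351M actual rows usually means the NOT IN subquery is being re-evaluated (as a spool or nested loop) once per outer row. Rewriting it as NOT EXISTS, or materializing the CTE into a temp table first, typically collapses it; a sketch of the NOT EXISTS form (note that NOT IN and NOT EXISTS differ if claimID can be NULL):

Code Snippet
insert @amts
select c.lineID, sum(c.rc), sum(c.copay), sum(c.deduct),
    case when sum(c.mm) > 0 and (sum(c.mm) < sum(c.mmamt)) then sum(c.mm) else 0 end
from claimln c
where c.status is null
    and not exists (select 1 from claim u
                    where u.claimID = c.lineID
                      and u.amt + u.copay + u.disct + u.mm + u.ded = 0)
group by c.lineID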
View 5 Replies
Feb 25, 2007
Is there any way to create indexes after insertion of a certain number of records? (I don't want to create the index after insertion of every record, but, for example, after insertion of 1000 records.)
I heard it should be possible with "bulk insert" or with transactions. Is that right? I need to do this with MS SQL Server 2005 (Workgroup edition).
Thank you for your ideas!
Jan
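Per-row index maintenance is exactly what bulk loading avoids, so the usual pattern is: drop (or disable) the index, load in batches, then build the index once at the end. A sketch using BULK INSERT's BATCHSIZE, which commits every N rows (file path and all names are placeholders):

Code Snippet
DROP INDEX dbo.MyTable.IX_MyTable_Col1   -- old-style syntax, valid on 2005 too
BULK INSERT dbo.MyTable
FROM 'C:\load\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000)
CREATE INDEX IX_MyTable_Col1 ON dbo.MyTable (Col1)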
View 5 Replies
Jan 18, 2008
Hello all - I currently have a project that has a gridview on the front end. The user can select multiple items from this grid, hit a button, and all of the selected records should be updated in the process. However, this result set can return a large amount of data, and I'm stumped on how to pass all of the IDs to the SP. I'd rather not call the SP for each record selected, as there could be 1,000 items selected, and well, I'd rather not call the SP 1000 times :p. I thought of generating a comma-delimited list as I loop through the grid and using dynamic SQL, but the IDs are about 6-7 digits long, and with commas a thousand of them would use up most of the maximum size of a varchar(8000).
Are there any good solutions to this problem? Passing the items as an array? Generating a data table in .NET and passing that?
Any help would be appreciated.
-Jaime
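The classic pre-2008 answer is a split function: pass one delimited varchar and expand it into a table inside the procedure, so the SP runs once. A sketch (all names are made up; on SQL 2008+ a table-valued parameter replaces this). A varchar(8000) holds roughly a thousand 7-digit IDs, so beyond that, send the list in pages:

Code Snippet
CREATE FUNCTION dbo.SplitIds (@list varchar(8000))
RETURNS @ids TABLE (ID int) AS
BEGIN
    DECLARE @pos int
    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @list)
        IF @pos = 0
        BEGIN
            INSERT @ids VALUES (CAST(@list AS int))
            SET @list = ''
        END
        ELSE
        BEGIN
            INSERT @ids VALUES (CAST(LEFT(@list, @pos - 1) AS int))
            SET @list = SUBSTRING(@list, @pos + 1, 8000)
        END
    END
    RETURN
END
GO
-- Inside the stored procedure:
-- UPDATE t SET ... FROM dbo.MyTable t JOIN dbo.SplitIds(@idList) s ON s.ID = t.ID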
View 4 Replies
May 22, 2008
Does anyone know how to compare XML data (stored as text) in a temp/variable/physical table in MSSQL 2000?
The following works in MSSQL 2005,
Code Snippet
create Table #t1 ([c1] int identity(1,1) not null, [c2] text)
create Table #t2 ([c1] int identity(1,1) not null, [c2] text)
Insert into #t1
Values('This is a test')
Insert into #t2
Values('This is a test')
Select * from #t1
Select * from #t2
Select * from #t1 where [c2] LIKE (Select [c2] from #t2)
drop table #t1
drop table #t2
but not MSSQL 2000.
Server: Msg 279, Level 16, State 3, Line 12
The text, ntext, and image data types are invalid in this subquery or aggregate expression.
Is this true (from BOL)?
Code Snippet
In comparing these column values, if any of the columns to be compared are of type text, ntext, or image, FOR XML assumes that values are different (although they may be the same because Microsoft® SQL Server™ 2000 does not support comparing large objects); and elements are added to the result for each row selected.
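SQL 2000 really cannot compare text values directly, but for values that fit in 8000 bytes a CAST turns it into an ordinary comparison; a sketch against the temp tables above:

Code Snippet
-- Works on SQL 2000 as long as the text values fit in 8000 bytes;
-- longer values would need piecewise DATALENGTH()/SUBSTRING() checks.
SELECT t1.*
FROM #t1 t1
JOIN #t2 t2
  ON CAST(t1.[c2] AS varchar(8000)) = CAST(t2.[c2] AS varchar(8000))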
View 1 Replies
Jul 20, 2005
I have a combo box where users select the customer name and can either go to the customer's info or open a list of the customer's orders. The RowSource for the combo box was a simple pass-through query:

SELECT DISTINCT [Customer ID], [Company Name], [contact name], City, Region FROM Customers ORDER BY Customers.[Company Name];

This was working fine until a couple of weeks ago. Now whenever someone has the form open, this statement locks the entire Customers table. I thought a pass-through query was read-only, so how does this take a table lock? I changed the code to an unbound rowsource that asks for input of the first few characters first, then uses this SQL statement as the rowsource:

SELECT [Customer ID], [Company Name], [contact name], City, Region FROM dbo_Customers WHERE [Company Name] like '" & txtInput & "*' ORDER BY [Company Name];

This helps, but if someone types only one letter, it could still be pulling a few thousand records and cause a table lock. What is the best way to populate a large combo box? I have too much data for the ADODB recordset to use the .AddItem method. I was trying to figure out how to use an ADODB connection so that I can make it read-only to eliminate the locking, but I'm striking out on my own. Any ideas would be appreciated. Roy (Using Access 2003 MDB with SQL Server 2000 back end)
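On the SQL Server side, one stopgap while the Access form is reworked is to read the combo-box rowset without taking shared locks; the trade-off is dirty reads, which a read-only picker can usually tolerate:

Code Snippet
-- Pass-through rowsource that neither blocks nor is blocked by writers.
SELECT DISTINCT [Customer ID], [Company Name], [contact name], City, Region
FROM Customers WITH (NOLOCK)
ORDER BY [Company Name]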
View 2 Replies
Aug 11, 2015
Table1 contains fields Groupid, UserName,Category, Dimension
Table2 contains fields Group, Name,Category, Dimension (Group and Name are not in Table1)
So basically I need to read the records in Table1 using Groupid, and each time there is a Groupid, select records from Table2 where Table2.Category in (Select Category from Table1)
and Table2.Dimension in (Select Dimension from Table1).
In Table1 there might be 10 Groupid records, all of which are different.
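A hedged reading of that requirement as a single query (column names from the post; correlating the Category/Dimension lookups per Groupid is an assumption about the intent):

Code Snippet
-- For each Groupid in Table1, pull the Table2 rows whose Category and
-- Dimension both appear among that group's Table1 rows.
SELECT t1.Groupid, t2.[Group], t2.Name, t2.Category, t2.Dimension
FROM (SELECT DISTINCT Groupid, Category, Dimension FROM Table1) t1
JOIN Table2 t2
  ON t2.Category = t1.Category
 AND t2.Dimension = t1.Dimension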
View 9 Replies
Mar 24, 2008
Hi,
I am a bit new to MSSQL Server. In our application we use a lot of SQL queries. To improve performance, we used the Database Engine Tuning Advisor to create the indexes. The older version of the application supports MSSQL 2000 as well. To re-create these new indexes, I have an issue running the "CREATE INDEX" commands, as the statements generated for index creation were produced on MSSQL 2005. The statements include the "INCLUDE" keyword, which is supported in MSSQL 2005 but not in MSSQL 2000.
Ex:-
CREATE INDEX IND_001_PPM_PA ON PPM_PROCESS_ACTIVITY
(ACTIVITY_NAME ASC, PROCESS_NAME ASC, START_TIME ASC, ISMONITORED ASC)
INCLUDE
(INSTANCE_ID, ACTIVITY_TYPE, STATUS, END_TIME, ORGANIZATION);
Any help in creating such indexes in 2000 version is welcome.
Thanks,
Suresh.
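SQL 2000 has no INCLUDE clause, so the nearest 2000-compatible equivalent is to append the would-be included columns as trailing key columns; the index is wider but covers the same queries. A sketch of that translation for the index above (watch SQL 2000's 16-column and 900-byte index key limits):

Code Snippet
CREATE INDEX IND_001_PPM_PA ON PPM_PROCESS_ACTIVITY
(ACTIVITY_NAME ASC, PROCESS_NAME ASC, START_TIME ASC, ISMONITORED ASC,
 INSTANCE_ID, ACTIVITY_TYPE, STATUS, END_TIME, ORGANIZATION)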
View 2 Replies
May 3, 2008
Hello
We are using SQL 2005 and now we are planning to move back to SQL 2000. What are the ways to do this?
We generated the script specifically for 2000 and ran it on SQL 2000, but we are getting errors in the script.
Could you please give me the steps to do this?
Thanks,
Sankar R
View 6 Replies
Jun 15, 2006
Ben writes "I have a sql script that doesn't function very well when it's executed on a SQL 2000 server.
The scrpt looks like this:
---------------------------------------------------------------------------------------------------
USE [master]
GO
IF NOT EXISTS (SELECT * FROM master.dbo.syslogins WHERE loginname = N'SSDBUSERNAME')
EXEC sp_addlogin N'SSDBUSERNAME', N'SSDBPASSWORD'
GO
GRANT ADMINISTER BULK OPERATIONS TO [SSDBUSERNAME]
GO
GRANT AUTHENTICATE SERVER TO [SSDBUSERNAME]
GO
GRANT CONNECT SQL TO [SSDBUSERNAME]
GO
GRANT CONTROL SERVER TO [SSDBUSERNAME]
GO
GRANT CREATE ANY DATABASE TO [SSDBUSERNAME]
GO
USE [master]
GO
If EXISTS (Select * FROM master.dbo.syslogins WHERE loginname = N'SSDBUSERNAME')
ALTER LOGIN [SSDBUSERNAME] WITH PASSWORD=N'SSDBPASSWORD'
GO
GRANT ADMINISTER BULK OPERATIONS TO [SSDBUSERNAME]
GO
GRANT AUTHENTICATE SERVER TO [SSDBUSERNAME]
GO
GRANT CONNECT SQL TO [SSDBUSERNAME]
GO
GRANT CONTROL SERVER TO [SSDBUSERNAME]
GO
GRANT CREATE ANY DATABASE TO [SSDBUSERNAME]
GO
USE [master]
GO
IF EXISTS (select * from dbo.sysdatabases where name = 'ISIZ')
DROP DATABASE [ISIZ]
GO
USE [SurveyData]
GO
exec sp_adduser 'SSDBUSERNAME'
GRANT INSERT, UPDATE, SELECT, DELETE
TO SSDBUSERNAME
GO
USE [SurveyManagement]
GO
exec sp_adduser 'SSDBUSERNAME'
GRANT INSERT, UPDATE, SELECT, DELETE
TO SSDBUSERNAME
---------------------------------------------------------------
This needs to be converted to a script that can be executed on both MSSQL 2000 and MSSQL 2005.
I was wondering if somebody here could help me with this problem?!
Thanks,
Ben"
View 1 Replies
Nov 17, 2007
I've been tasked to move our production databases from MSSQL 2000 to 2005. I've supported MSSQL since version 6.5 and have performed migrations to successor versions.
Current Environment is MSSQL 2000 32-bit with current Service Packs.
I've performed mock migrations on Test servers upgrading all Production instances simultaneously from MSSQL 2000 to 2005 32-bit. The Test environment is identical to Production minus server name, IP etc. Also I have a separate server with MSSQL 2005 installed where I use the DETACH / ATTACH and BACKUP / RESTORE method for migration / acceptance testing. There are approximately 30 databases totaling 70 GB. This has gone as expected and fairly successful. Vendors have been coordinated with to update code and staff for acceptance testing.
I'd prefer going directly to MSSQL 2005 64-bit instead if possible due to memory benefits etc. This is where I'd like some feedback prior to borrowing a 64-bit server for testing.
Upgrade options:
1. Is it better to migrate from MSSQL 2000 32-bit to 2005 64-bit via:
a. DETACH / ATTACH
b. BACKUP / RESTORE
c. Is one method more advantageous relating to the end result?
2. Regarding XP clients, have issues been experienced with the default SQL Server driver, or is an alternative recommended for XP clients connecting to MSSQL 64-bit server databases?
3. If you have performed this migration and have relevant experience please pass them along.
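On question 1: BACKUP/RESTORE is generally the safer of the two, since the source database stays intact and a backup restores across x86/x64 without issue. A sketch of the post-restore steps commonly recommended for a 2000-to-2005 move (names and paths are placeholders):

Code Snippet
RESTORE DATABASE MyDb
FROM DISK = 'D:\backups\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'D:\data\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'D:\data\MyDb.ldf'
GO
EXEC sp_dbcmptlevel 'MyDb', 90   -- raise from 80 once testing passes
GO
USE MyDb
DBCC UPDATEUSAGE (0)             -- correct page counts carried over from 2000
GO
EXEC sp_updatestats              -- refresh statistics for the new optimizer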
View 3 Replies
View Related