Access One Billion Large Image Records In Ms-SQL 2005
Jan 31, 2008
Hi,
I want to display the image records for each employee, with 10 images per employee. The total number of records is more than one billion.
Please provide an optimized query that I can use in a stored procedure to display the results in a GridView.
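With that many rows, returning everything at once isn't realistic, so what I have in mind is page-at-a-time retrieval, roughly along these lines (the table, column, and procedure names are only placeholders):

-- Hypothetical paged lookup; dbo.EmployeeImages(EmployeeID, ImageID, ImageData) is a placeholder schema.
CREATE PROCEDURE dbo.GetEmployeeImagesPage
    @EmployeeID INT,
    @PageNumber INT,
    @PageSize   INT = 10
AS
BEGIN
    SET NOCOUNT ON;

    WITH NumberedImages AS
    (
        SELECT ImageID, ImageData,
               ROW_NUMBER() OVER (ORDER BY ImageID) AS RowNum
        FROM dbo.EmployeeImages
        WHERE EmployeeID = @EmployeeID      -- touches only this employee's rows, not the whole table
    )
    SELECT ImageID, ImageData
    FROM NumberedImages
    WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                     AND @PageNumber * @PageSize;
END

The GridView would then page through calls to this procedure instead of binding to the full result set.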
View 7 Replies
Apr 20, 2007
I am using the 3-tiered architecture design (presentation, business logic, and data access layers). I am stuck on how to send the image the user selects in the upload file control to the BLL and then to the DAL, because the DAL does all the inserts into the database. I would like to be able to check the file type in the BLL to make sure the file being uploaded is indeed a picture. Is there a way I can send the location of the file to the BLL, check the file type, then upload the file and have the DAL insert the image into the database? I have seen examples where people use streams to upload the file directly from their presentation layer, but I would like to keep everything separated in the three classes if possible. I also wasn't sure what variable type the image would be in the BLL function that receives the image from the PL. If there are any examples or tips anyone can give me, that would be appreciated.
View 2 Replies
Mar 20, 2008
HELP. I've inherited some less-than-stellar design.
I have a table:
tbMD
ObjectID uniqueidentifier not null
Tag nvarchar(50) not null
Data nvarchar(400) not null
This table has 1.6 billion records and has filled up the drive (nearly 1 TB in the DB, with about 600GB in this one table).
I need to move this table to another drive with ZERO down time.
I have created a new filegroup on a second drive array and created a new table (tbMD_TEMP) and my thought was to copy all the data from tbMD into tbMD_Temp then drop tbMD, and rename tbMD_Temp.
So, anyone have any great ideas on how to get this done very quickly?
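One idea I'm weighing is to copy in keyed batches so tbMD stays online the whole time, then do the drop/rename in one short transaction at the end. A rough sketch (the batch size is only illustrative, rows modified during the copy would still need a final catch-up pass, and it assumes ObjectID can serve as the matching key):

-- Copy tbMD into tbMD_Temp in batches so no single statement locks the table for long.
DECLARE @BatchSize INT;
SET @BatchSize = 100000;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.tbMD_Temp (ObjectID, Tag, Data)
    SELECT TOP (@BatchSize) s.ObjectID, s.Tag, s.Data
    FROM dbo.tbMD AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.tbMD_Temp AS t WHERE t.ObjectID = s.ObjectID);

    IF @@ROWCOUNT < @BatchSize BREAK;   -- nothing (or only a partial batch) left to copy
END

If this is Enterprise Edition, another route might be creating a clustered index on the new filegroup with ONLINE = ON, which relocates the data while the table stays available.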
View 5 Replies
May 27, 2008
Hi All,
I have a table in MS Access with a CandidateId column and an Image column. The Image column is in OLE Object format. I need to move this to SQL Server 2005, with the CandidateId column as an integer and the candidate image column as the Image datatype.
It's very urgent. I need a tool to move this to SQL Server 2005, or C# code to move this table from MS Access to SQL Server 2005.
Please help as soon as possible; waiting for your reply.
with regards
View 1 Replies
Dec 17, 2007
I am working on a project to develop a route-finding system that searches for the optimal route for its users. The current version, which I have completed and run successfully, uses normal database access in MS SQL 2005: I store node information in the database, and the application queries it with ordinary SELECT statements and gets a DataTable back. The result is rather slow, caused by the many round trips to the database server. The application takes 8 seconds to look for a short route, without even considering the traffic-information calculations I will add later. Any comments on the architecture, or on the approach for switching my algorithm to T-SQL?
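As an illustration of the direction I'm considering, a single set-based statement (here a recursive CTE over a hypothetical Edges(FromNode, ToNode, Cost) table) replaces the per-node SELECTs, although a real shortest-path search would still need cycle handling and pruning on top of it:

-- Expand paths outward from a start node in one statement instead of one query per node.
DECLARE @Start INT;
SET @Start = 1;

WITH Paths (ToNode, TotalCost, Hops) AS
(
    SELECT e.ToNode, e.Cost, 1
    FROM dbo.Edges AS e
    WHERE e.FromNode = @Start

    UNION ALL

    SELECT e.ToNode, p.TotalCost + e.Cost, p.Hops + 1
    FROM Paths AS p
    INNER JOIN dbo.Edges AS e ON e.FromNode = p.ToNode
    WHERE p.Hops < 10                      -- hard stop to keep the recursion bounded
)
SELECT ToNode, MIN(TotalCost) AS BestCostFound
FROM Paths
GROUP BY ToNode
OPTION (MAXRECURSION 10);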
View 5 Replies
Dec 12, 2007
Hi
We are using the SQL Server 2005 Full-Text Service. The data is not huge, but each record is small and there are a large number of records: 35 million records now, with 11 GB of data and about 1.6 GB of full-text catalog on the table. This is expected to grow to at least 10 times its current size. The issue is that FTS takes a very long time to return results when the number of hits (rows) returned from FTS is large for some searches. With the same data and catalog, full-text queries for less common words return promptly. The nature of the problem doesn't allow us to keep only the top results; we need all of them. So it's not about the size of the data but the number of results getting returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.
I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches against smaller catalogs. Is there any other way this issue can be solved?
If splitting is the only way, can you give me an idea of the statistical/standard limit on the number of search results / catalog size at which FTS still gives good results?
Thanks in advance
View 1 Replies
Mar 1, 2007
hi all,
I have a large table in SQL Server 2005 (about 6 columns and 10 million records).
I need to work through all the records linearly (I know it sounds dumb, but I really do need to process every record).
Obviously, when I try to work on this whole table at once, SQL Server gets stuck with a timeout or something like that...
I've noticed that a simple query like "select top 100 * from ExportTable" still works.
Is there any way to have SQL send me the data as it reads it, so that I can process it at the same time? I basically work with a DataSet, so just fixing the timeout won't be helpful, since Windows probably won't allow me to load this amount of data into memory.
Can anyone help?
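To illustrate the kind of chunked read I'm picturing (the key column name is only a guess):

-- Read the table in fixed-size chunks, keyed on an indexed column (ID is a placeholder),
-- so each round trip stays small enough to avoid timeouts and memory pressure.
DECLARE @LastID INT;
SET @LastID = 0;

SELECT TOP (1000) ID, Col1, Col2, Col3, Col4, Col5
FROM dbo.ExportTable
WHERE ID > @LastID
ORDER BY ID;
-- the application then sets @LastID to the largest ID returned and repeats until no rows come back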
Z
View 1 Replies
Jul 20, 2007
Recently we added a new table into our SQL2000 database specifically to store scanned in images of documents. This new table contains a PK field, a couple of datetime fields, a couple of char(1) fields and one 'image' field.
Before adding this table, the database size was approx 6GB. Six months after adding this new table, the database has grown to 18GB - 11GB of this is due to the scanned in images.
Would this new table affect the SQL performance with regards to accessing other data in the database that has nothing related to the new table?
If so, would moving this new table into its own database be recommended?
Thanks
Rod
View 1 Replies
May 28, 2008
Hi Friends,
I have created a procedure in SQL Server 2005 for retrieving email addresses from a table based on date_expiry and concatenating all the email addresses into @tolist, as shown below:
Declare @tolist varchar(8000)
set @tolist = ''

SELECT @tolist = @tolist + ';' + COALESCE(email, '')
FROM awc_register
WHERE DATEDIFF(day, date_expiry, GETDATE()) = 3

print @tolist
set @tolist = substring(@tolist, 2, len(@tolist))
I pass this @tolist to another procedure, which sends mail to the email addresses in @tolist.
The problem is that I need to send the mail to each email address separately (not as one bulk message).
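One way around the concatenation entirely is to loop over the addresses and send one message each; a rough sketch using Database Mail (the profile name and message text are placeholders):

DECLARE @email varchar(200);

DECLARE email_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT email
    FROM awc_register
    WHERE DATEDIFF(day, date_expiry, GETDATE()) = 3
      AND email IS NOT NULL;

OPEN email_cursor;
FETCH NEXT FROM email_cursor INTO @email;

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'MyMailProfile',       -- placeholder Database Mail profile
        @recipients   = @email,                -- one recipient per message
        @subject      = 'Your registration expires in 3 days',
        @body         = 'Please renew your registration.';

    FETCH NEXT FROM email_cursor INTO @email;
END

CLOSE email_cursor;
DEALLOCATE email_cursor;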
Thanks,
View 1 Replies
Jun 20, 2006
Hi!
The DB I am working with has about 10 tables and some of the tables have 200,000 to 500,000 records. All tables have a clustered index on the primary key.
I think INSERT performance could be better - I add thousands of records at a time from many connections.
Is there a way to defer the update of indexes, so that I can update the tables and then let the indexes regenerate?
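One option for a large load - a sketch, assuming the nonclustered indexes can be offline while the load runs (the clustered index/primary key has to stay in place) - is to disable them, load, and rebuild afterwards:

-- Disable a nonclustered index before the bulk load (hypothetical index and table names)
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable DISABLE;

-- ... perform the large inserts here ...

-- Rebuild it once the load is finished
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable REBUILD;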
thanks,
Jas.
View 4 Replies
Feb 25, 2007
Is there any way to create indexes after the insertion of a given number of records? I don't want to update the index after the insertion of every record, but, for example, after the insertion of 1000 records. I heard it should be possible with "bulk insert" or with transactions. Is that right? I need to do this with MS SQL Server 2005 (Workgroup Edition).
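If the data comes from a file, BULK INSERT can commit in batches of a chosen size, so each 1000-row batch is its own transaction; a sketch with a placeholder file path and table name:

-- Load in 1000-row batches; each batch is committed as a separate transaction.
BULK INSERT dbo.MyTable
FROM 'C:\data\mytable.csv'          -- placeholder path
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 1000,
    TABLOCK
);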
Thank you for your ideas!
Jan
View 5 Replies
Jan 29, 2008
Hi. I have created a stored procedure that handles global updates. I am using a cursor to fetch one row at a time for updating (this is required to implement the business logic). Now, when I execute this stored procedure, it gives me a deadlock error, and I don't know why I am getting this error (approximately 150,000 rows). If I remove unnecessary records from the table (down to approximately 50,000), it works fine. Is there any way to handle this? I am calling this stored procedure from my Windows service. Please give me a good solution if possible.
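For whatever part of the logic can be expressed without the cursor, doing the update in small set-based batches keeps each transaction short and usually avoids the deadlock; a sketch with placeholder table and column names:

-- Update in batches of 5000 so each transaction holds few locks for a short time.
DECLARE @Rows INT;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    UPDATE TOP (5000) dbo.MyTable
    SET SomeColumn = SomeColumn * 1.05,   -- placeholder for the real business-logic calculation
        NeedsUpdate = 0
    WHERE NeedsUpdate = 1;

    SET @Rows = @@ROWCOUNT;
END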
View 3 Replies
Jul 7, 2015
I have a task to count distinct records in a big table with roughly 30M records; performance is an important factor. The query is to calculate weekly stats, and the weekly record count could be as high as 1M.
The actual result is like:
ID          Policy
350235744   Credit Cards
350235744   PCI
350235744   PCI Audit
So the final number for this particular Policy is 3
I can write the query like:
select count(distinct Incident_id) policy_name
from Reporting_DailyDlpDetail
Where (year(INSERT_DETECT_TS)=2015) and (month(INSERT_DETECT_TS) =6) and (day(INSERT_DETECT_TS) between 2 and 9)
This returns 526254 and costs 11 seconds to complete
or a query like:
Select distinct Incident_id, policy_name
from Reporting_DailyDlpDetail
Where (year(INSERT_DETECT_TS)=2015) and (month(INSERT_DETECT_TS) =6) and (day(INSERT_DETECT_TS) between 2 and 9)
This returns 749687 and costs roughly 1 minute to complete.
The results of the two queries are different; I believe the latter gives the correct number. How can I count the distinct values based on a combination of columns? And considering the size of the data, what is the best and most efficient way to run the stats calculation across over 30 different scenarios (different policies and alert types) without timing out?
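For the combination count itself, a derived table over the distinct pairs gives one number (and a GROUP BY gives the per-policy breakdown). Using a date-range predicate instead of year()/month()/day() also lets an index on INSERT_DETECT_TS be used; the dates below just restate the June 2-9 window:

-- Count of distinct (Incident_id, policy_name) combinations for the week
SELECT COUNT(*) AS combo_count
FROM (
    SELECT DISTINCT Incident_id, policy_name
    FROM Reporting_DailyDlpDetail
    WHERE INSERT_DETECT_TS >= '20150602'
      AND INSERT_DETECT_TS <  '20150610'     -- sargable range instead of year()/month()/day()
) AS d;

-- Or, the distinct incident count per policy
SELECT policy_name, COUNT(DISTINCT Incident_id) AS incident_count
FROM Reporting_DailyDlpDetail
WHERE INSERT_DETECT_TS >= '20150602'
  AND INSERT_DETECT_TS <  '20150610'
GROUP BY policy_name;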
View 9 Replies
Nov 26, 2007
Hello,
I have a VB.NET Windows Application with a SQL 2K5 back end.
The application creates a view which produces approximately 5 million records. I am attempting to transfer these records to a table programmatically, but every attempt causes a timeout. The name of the source server and view will change depending on the client's installation.
Using the standard SQL connection/command objects with the "INSERT INTO TABLE (SELECT * FROM VIEW)" doesn't work at all.
What is the best way to transfer these records programmatically?
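One pattern that usually avoids the timeout is to do the copy on the server in keyed batches, so no single statement runs for minutes (and to set the client command timeout to 0 while it runs). A sketch with placeholder table, view, and column names:

-- Copy the view's rows into the destination table in 50,000-row batches,
-- assuming the view exposes a unique key column (KeyID is a placeholder).
DECLARE @Rows INT;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.DestinationTable (KeyID, Col1, Col2)
    SELECT TOP (50000) v.KeyID, v.Col1, v.Col2
    FROM dbo.SourceView AS v
    WHERE NOT EXISTS (SELECT 1 FROM dbo.DestinationTable AS d WHERE d.KeyID = v.KeyID);

    SET @Rows = @@ROWCOUNT;
END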
Thanks,
-Torrwin
View 5 Replies
Nov 15, 2007
Hi,
We need to generate a report using SQL Server 2005 Reporting Services which can have 1 million plus records. So we have created a report with a simple select query. This query when run against our database returned 1110018 records.
We deployed the report onto our server and tried accessing the report through its URL. What we observed was that the report ran for about 15 minutes and then prompted for a user ID and password (Windows authentication). On giving the user ID and password it continued running for another 15 minutes, and at the end of it the browser again prompted for a user ID and password. This cycle continued twice, and at the end of it we got the message "The page cannot be displayed" (i.e. the usual message displayed by the browser when it cannot load a web page). We were not able to load the report at all.
Are there any specific considerations when loading reports with such a high number of records? Any suggestions or solutions are welcome.
Thanks,
CodeKracker.
View 4 Replies
Jan 18, 2008
Hello all - I currently have a project that has a gridview on the front end. The user can select multiple items from this grid, hit a button and all of the selected records should be updated in the process. However, this resultset can have a large amount of data coming back, and I'm stumped on how to pass all of the ID's to the sp. I'd rather not call the SP for each record selected, as there could be 1,000 items selected, and well, I'd rather not call the SP 1000 times :p. I thought of generating a comma delimited list as I'm looping through the grid and using dynamic sql, but the IDs are about 6-7 numbers long, and including comma, would take up almost all of the max space in a varchar.
Are there any good solutions to this problem? Passing the items as an array? Generating a data table in .NET and passing that?
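On SQL Server 2005, one workable pattern is to pass all the selected IDs in a single XML parameter and shred it inside the procedure, so one call updates every selected row; a sketch with placeholder object names:

-- Hypothetical procedure; expects XML such as '<ids><id>1234567</id><id>2345678</id></ids>'
CREATE PROCEDURE dbo.UpdateSelectedItems
    @IdList XML
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE t
    SET t.Processed = 1                                   -- placeholder update
    FROM dbo.MyItems AS t
    INNER JOIN (
        SELECT x.id.value('.', 'int') AS ItemID
        FROM @IdList.nodes('/ids/id') AS x(id)
    ) AS ids ON ids.ItemID = t.ItemID;
END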
Any help would be appreciated.
-Jaime
View 4 Replies
Jan 6, 2004
I wrote an ActiveX service to delete records from several tables, and those tables hold more than 100,000 records. When I tested it by deleting 1,000 records it was OK, but once the volume increased to 100,000 it gave me an error message saying the operation failed with a timeout.
How can I overcome this problem? Please help!
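Deleting in smaller batches from the service, repeating until nothing is left, keeps each statement well under the timeout; a sketch with a placeholder table and filter:

-- Delete in chunks of 5,000 rows; repeat while rows are still being removed.
SET ROWCOUNT 5000
DELETE FROM dbo.MyTable
WHERE SomeDateColumn < '20031231'   -- placeholder filter for the rows being purged
SET ROWCOUNT 0                      -- reset so later statements are unaffected
-- the service loops on the DELETE while the rows-affected count stays above zero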
View 8 Replies
Sep 8, 2015
I have the following scenario:
SQL database on SQL 2012
Large production table with 15 million records
The table has 3 years of data
New monthly data is being added every month.
New monthly data is loaded, checked, and finally approved after 6 or 7 iterations. Because of this iteration, the monthly data set is added, then deleted, then added again several times. Because the table is big, this process takes time; any thoughts on how to make the delete/insert process faster (see the batched-delete sketch after the table definition below)? Keep in mind I cannot do much, because it is a production table and is being accessed by other users for other analysis.
Delete is done based on trx_date which is a year/month combo, like 201508.
The table has monthly sales by customer aggregated.
The table structure is:
CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,
[code].....
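As a rough illustration of the batched-delete idea (the batch size is only an example), removing the month being reloaded in small chunks keeps the locks short so other users can keep querying:

-- Remove the month being reloaded (e.g. 201508) in 10,000-row chunks.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.Sales
    WHERE Trx_Date = 201508;

    IF @@ROWCOUNT = 0 BREAK;
END

If the table could ever be partitioned by Trx_Date, switching the month's partition out and in would replace the delete entirely, but that is a bigger structural change.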
View 9 Replies
Oct 29, 2007
Does anyone know how to connect to an IMAGE/SQL database hosted on an HP 3000 from the .NET environment?
View 1 Replies
Feb 1, 2008
A friend reminded me of a problem we tried to solve a few years ago without success. Below is a copy of the email he sent me. We would very much appreciate any ideas from the community. Thanks!
Let's start with a simple schema where you have 4 tables:
View 3 Replies
Jul 30, 2007
Hi;
I want to know: if a table has 1 billion records, will it still work easily with .NET objects like DataSets?
If not, what is the normal amount of data for SQL Server that is easy for a system to manage?
Adnan
View 2 Replies
Mar 28, 2008
I will be receiving data from a company called Loan Performance that has one file/table holding 1 billion rows. They send data by period, and I plan to load the data via BCP from NT/DOS scripts. The 1 billion rows represent data for 200+ periods. Are the following design plans feasible?
1. Partition the table by period value. I'm not sure of the max number of partitions per table in 2005, but I think we have period data back to 1992 and a new period gets created every month, so the possibility of having more than 1,000 partitions exists. I plan on pre-creating partitions for future data instead of creating them dynamically when a new period is sent.
2. Load data via BCP in DOS shell scripts that drop the index (by partition), BCP in the data, and then re-create the indexes by partition. Is this possible, and will I see a performance increase as opposed to one huge table? (I'm pretty much sure I will.) There is usually one period's data present per day, but sometimes the vendor resends all data (which would get loaded on a weekend).
I'm a bit unsure of where to start, as I have never worked with this amount of data before. I worked with partitioning in Oracle a long time ago. I plan on having a 2 x QuadCore 2.66GHz server with 32GB of RAM and SQL 2005 EE 64-bit connected to 1 terabyte of SAN disk.
Thanks all,
PMA
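For what it's worth, the partitioning setup I have in mind would look roughly like this (the boundary values and filegroup mapping are only examples; SQL Server 2005 allows at most 1,000 partitions per table, which bounds how far ahead partitions can be pre-created):

-- Partition by period (an int such as 199201, 199202, ...); add one boundary per period.
CREATE PARTITION FUNCTION pfPeriod (int)
AS RANGE RIGHT FOR VALUES (199201, 199202, 199203);

CREATE PARTITION SCHEME psPeriod
AS PARTITION pfPeriod ALL TO ([PRIMARY]);        -- or map each partition to its own filegroup

CREATE TABLE dbo.LoanPerformance
(
    PeriodID  int    NOT NULL,
    LoanID    bigint NOT NULL,
    Balance   money  NULL
    -- ... remaining columns ...
) ON psPeriod (PeriodID);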
View 7 Replies
Mar 28, 2015
Our system runs a SQL Server 2012 DB; it has a table (table_a) which has over 10M records. Our system has to receive a data file from the previous system daily, which contains approximately 3M updated or new records for table_a. My job is to update table_a with the new data.
The initial solution is:
1 Create a table (table_b) whose structure is the same as table_a's
2 Use BCP to import updated records into table_b
3 Remove outdated data in table_a:
delete a from table_a a inner join table_b b on a.key_fields = b.key_fields
4 Append updated or new data into table_a:
insert into table_a select * from table_b
In testing, this solution is very inefficient; step 3 alone takes several hours, for example. How can I improve it?
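Two changes usually make the biggest difference here: an index on the staging table's join key, and replacing the delete/insert pair with an update of existing rows plus an insert of only the new ones. A sketch (the key and data column names are placeholders):

-- Index the staging table's key so the join is not a scan against a scan
create index IX_table_b_key on table_b (key_fields)

-- Update rows that already exist in table_a
update a
set a.col1 = b.col1,
    a.col2 = b.col2                -- placeholder data columns
from table_a a
inner join table_b b on a.key_fields = b.key_fields

-- Insert rows that are new
insert into table_a (key_fields, col1, col2)
select b.key_fields, b.col1, b.col2
from table_b b
where not exists (select 1 from table_a a where a.key_fields = b.key_fields)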
View 9 Replies
Jun 5, 2015
Currently we have a database about 300G in size. Because our backup system failed a while back, we were left with a transaction log file which grew to about 160G. However, our backups are working again and everything is working fine. My understanding is that the transaction log file is now practically empty, but its capacity remains at 160G.
When you delete records, the deleted transactions get logged to the transaction log file. My understanding is that when a backup is done, these transactions get cleared out of the transaction log file.
Could I make use of this relatively large transaction log file and start deleting records without actually adding to the transaction log file's size?
The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we can delete a chunk of records from a table; then, after a backup has completed, we can delete another chunk of records from the table, until we have got the table down to just the records we now need. Will this work?
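Roughly, yes - as long as the log keeps being backed up while the purge runs, the space inside the 160G file gets reused rather than growing. A sketch of the kind of chunked delete I mean (the table name, cutoff, batch size, and backup path are placeholders):

-- Purge old logging rows in chunks so no single transaction writes a huge amount of log.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.LoggingTable
    WHERE LogDate < '20150101';                  -- placeholder cutoff for rows no longer needed

    IF @@ROWCOUNT = 0 BREAK;
END

-- The scheduled log backups (or an explicit one between chunks) free the logged space for reuse:
BACKUP LOG MyDatabase TO DISK = 'E:\backup\MyDatabase_log.trn';   -- placeholder database and path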
View 2 Replies
Sep 22, 2014
I am trying to use SQL to pull unique records from a large table. The table consists of people with in and out dates. Some people have duplicate entries with the same IN and OUT dates, others have duplicate IN dates but sometimes are missing an OUT date, and some don’t have an IN date but have an OUT date.
What I need to do is pull a report of all Unique Names with Unique IN and OUT dates (and not pull duplicate IN and OUT dates based on the Name).
I have tried 2 statements:
#1:
SELECT DISTINCT tblTable1.Name, tblTable1.INDate
FROM tblTable1
WHERE (((tblTable1.Priority)="high") AND ((tblTable1.ReportDate)>#12/27/2013#))
GROUP BY tblTable1.Name, tblTable1.INDate
ORDER BY tblTable1.Name;
#2:
SELECT DISTINCT tblTable1.Name, tblTable1.INDate
FROM tblTable1
WHERE (((tblTable1.Priority)="high") AND ((tblTable1.ReportDate)>#12/27/2013#))
UNION SELECT DISTINCT tblTable1.Name, tblTable1.INDate
FROM tblTable1
WHERE (((tblTable1.Priority)="high") AND ((tblTable1.ReportDate)>#12/27/2013#));
Both of these work great… until I add the OUT date. Once the query starts to pull the OUT date as well, it also pulls all those who have a duplicate IN date but are missing the OUT date.
Example:
Name          IN          OUT
John Smith    1/1/2014    1/2/2014
John Smith    1/1/2014    (blank)
I am very new to SQL and I am pretty sure I am missing something very simple… Is there a statement that can filter the results so that no duplicates appear in the query?
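One simple way to collapse the blank-OUT duplicates is to group on Name and IN date and take the maximum OUT date for each pair, for example (written in the same style as the queries above; OUTDate is my guess at the column name, and Max ignores the blank values):

SELECT tblTable1.Name, tblTable1.INDate, Max(tblTable1.OUTDate) AS MaxOUTDate
FROM tblTable1
WHERE (((tblTable1.Priority)="high") AND ((tblTable1.ReportDate)>#12/27/2013#))
GROUP BY tblTable1.Name, tblTable1.INDate
ORDER BY tblTable1.Name;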
View 1 Replies
Jun 21, 2007
Hello, All:
I have many, many Access databases that are roughly 1.5GB-3GBs each and they have millions of records. Each MS Access Database file corresponds to one Database in SQL server. I'm trying to simply transform the data as it is in Access to MS SQL 2005.
I'm using the 64 bit version of Windows Server 2003 and the 64 bit version of SQL 2005. The server is running four dual core AMD Operton processors and has 8GB of RAM with a 1TB RAID 5 configuration. I think the hardware should be sufficient but the SQL Server Import and Export Wizard can't seem to handle the large number of tables/records. If I do one table at a time, it works well; however, it produces the following error message whenever I try to import the entire database:
Pre-execute (Error)
Messages
Error 0xc0202009: {5A5BF7AD-E86B-4316-AD43-1912358C56F4}: An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft JET Database Engine" Hresult: 0x80004005 Description: "Unspecified error".
(SQL Server Import and Export Wizard)
Error 0xc020801c: Data Flow Task: The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0202009.
(SQL Server Import and Export Wizard)
Error 0xc004701a: Data Flow Task: component "Source 64 - District Corporal Punishment Class" (5743) failed the pre-execute phase and returned error code 0xC020801C.
(SQL Server Import and Export Wizard)
Any ideas would be much appreciated!
Thank you,
Cody
View 1 Replies
Jan 12, 2004
Hi,
My client has a rather large database with some very large reports. Some of the reports have around 20 sub-reports a piece. We have decided to move the client's application to a .NET web application and would migrate them to SQL Server 2000.
The only problem is now, designing the reports. I have tried doing what Microsoft says (converting to stored procedures and views) but I keep getting syntax errors on the SQL side of things when I cut and paste.
For example, the following code is taken from Access :
SELECT tblProjects.fldCountry, tblProjects.fldDescription, tblOrganizations.fldAcronym, tblProjects.fldProjID, Max(tblProjYears.fldStartDate) AS MaxOffldStartDate, Max(tblProjYears.fldEndDate) AS MaxOffldEndDate, qryProjLocsWithFEData.fldProjPeriodID
FROM (tblProjects INNER JOIN tblOrganizations ON tblProjects.fldOrgID = tblOrganizations.fldOrgID) INNER JOIN (tblProjYears INNER JOIN qryProjLocsWithFEData ON tblProjYears.fldProjPeriodID = qryProjLocsWithFEData.fldProjPeriodID) ON tblProjects.fldProjID = tblProjYears.fldProjID
GROUP BY tblProjects.fldCountry, tblProjects.fldDescription, tblOrganizations.fldAcronym, tblProjects.fldProjID, qryProjLocsWithFEData.fldProjPeriodID
ORDER BY tblProjects.fldCountry, tblOrganizations.fldAcronym, tblProjects.fldProjID;
But when I try that in SQL Query Analyzer I get the error: The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator.
I'm pretty sure it's on the tblProjects.fldDescription Group By, but if I leave it out, it still throws an error. Anybody have any ideas?
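If fldDescription is a text/ntext column, one common workaround is to keep it out of the GROUP BY and aggregate a cast of it instead (assuming the description is the same for a given project; the 4000 length is just an assumption), something like:

SELECT tblProjects.fldCountry,
       MAX(CAST(tblProjects.fldDescription AS nvarchar(4000))) AS fldDescription,
       tblOrganizations.fldAcronym, tblProjects.fldProjID,
       MAX(tblProjYears.fldStartDate) AS MaxOffldStartDate,
       MAX(tblProjYears.fldEndDate) AS MaxOffldEndDate,
       qryProjLocsWithFEData.fldProjPeriodID
FROM (tblProjects INNER JOIN tblOrganizations ON tblProjects.fldOrgID = tblOrganizations.fldOrgID)
     INNER JOIN (tblProjYears INNER JOIN qryProjLocsWithFEData
                 ON tblProjYears.fldProjPeriodID = qryProjLocsWithFEData.fldProjPeriodID)
     ON tblProjects.fldProjID = tblProjYears.fldProjID
GROUP BY tblProjects.fldCountry, tblOrganizations.fldAcronym, tblProjects.fldProjID,
         qryProjLocsWithFEData.fldProjPeriodID
ORDER BY tblProjects.fldCountry, tblOrganizations.fldAcronym, tblProjects.fldProjID;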
Thanks
View 5 Replies
Aug 19, 1999
After moving our database between servers, one table had a hiccup in its
identity column. The number jumped from 761 to 1886863475 on the next
insert. This is a production server and I didn't catch it until yesterday.
So I have several days worth of numbers in a central table. I can't clean
them out, but I wondered if I could do the following:
DBCC CHECKIDENT ('table_name', RESEED, 800)
I tested it on our development server and it doesn't seem to cause any
problems. I know that when we get to 1.8 billion on the identity column
again we'll get an error, but I'm willing to risk it.
A few questions. 1) Am I insane for trying this? 2) Has anyone else run into
a similar problem and what did you do to fix it? 3) If I do this, is there
anything in particular to watch out for? 4) What caused the jump in the
first place?
My other option is to change the datatype from int to decimal(19) or
something along those lines. This will upset our programmers and I'll have
to change all my foreign keys (not a big deal, just a pain).
Any input at all here would be appreciated.
Thanks,
Grant
View 1 Replies
Feb 29, 2012
So I just got an email from Production Support saying that an hour and a half of downtime is unacceptable for moving half a billion rows between 2 partitions, given that I am moving a clustered index and space is a consideration.
I can not use partition switching because the clustered index is changing.
This is what I am doing...
1. I create a new table with the new clustered index on a new partition.
2. I move the records in 5K set-based batches by doing a range search on the existing clustered index on the existing table.
3. I then reapply all of the nonclustered indexes from the original table to the new one.
4. I do an sp_rename swap-out.
The same way I have done this many times before. Is there some new secret special sauce (other than partition switching) I can use?
View 14 Replies
Mar 30, 2007
Hi everyone. I hope that my question isn't too broad, but here it goes...
I am trying to figure out the best way to scale a SQL Server database so that it can handle a billion simultaneous users querying the same tables, and can easily scale to handle many billions of simultaneous users. The database must also have 99.999% availability. The number of licences and the amount of hardware needed is not an issue.
Thanks in advance.
View 15 Replies
Aug 5, 2014
It was an interview question, what's the best way to update a table with 10 billion rows.
View 16 Replies
Sep 1, 2006
I have an interesting challenge: we are not allowed to give users direct access to data in our SQL Server. Audit requires us to take the data out of our production server and pass it to the user. My situation is that I have a table in SQL with over 100,000 records and growing. I want to pass that to an Access database. I am utilizing DTS and a data transform.
Is there a better way? The package is running slowly and even appears to freeze. I expect this amount of data to keep growing as well.
Don S
View 1 Replies
Jun 1, 2006
Hi ,
How can I insert and update a large number of records (400,000 / 4 lakh) into a destination table through Business Intelligence Development Studio using an SSIS package? How do I achieve this? I tried with a left outer join and a Conditional Split, but the problem is that it's not able to insert and update records simultaneously. Can anyone give me a sample?
Thanks & Regards
Jeyakumar.M
View 3 Replies