Error.log Too Large
Jan 10, 2007
This file is getting up to 110 GB. Why?
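If this is the SQL Server ERRORLOG file (an assumption on my part), the step I was planning to try is cycling the error log so the huge file can be archived or deleted; a minimal sketch:

-- Assumes the 110 GB file is the SQL Server error log (ERRORLOG).
-- Cycling closes the current log and starts a new one; the old file
-- can then be archived or deleted from disk.
USE master;
EXEC sp_cycle_errorlog;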
First off, I understand that it is a horrible idea to run extremely large/long-running reports, but sometimes it ends up being the best possible solution due to external forces.
I've got a 25,000 page report that we recently converted from Crystal Reports to SSRS. The SSRS server is a 64-bit Windows Server 2003 machine with 32 GB of RAM running SSRS 2005. When running the report through the Report Manager web application, it renders in the browser/viewer after about 12 minutes. Exporting to PDF through the browser/viewer in Report Manager takes an additional 55 minutes. It does work, and it produces a whopping 1.03 GB PDF.
Unfortunately, I've run into a problem when trying to do this from a console application using the SSRS client API. After about 30-35 minutes I get an exception on the client with the following error:
Exception Message: The underlying connection was closed: An unexpected error occurred on a receive.
InnerException = Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
Here is the API call:
byte[] m_data = reportingService.Render(this.ReportPath, this.ExportFormat, null,
deviceInfo, selectedParameters, null, null, out encoding, out m_mimeType,
out usedParameters, out warnings, out streamIds);
Here are some things I've tried so far:
set the HttpRuntime ExecutionTimeout value to 3 hours on the report server
disabled http keep alives on the report server
increased the script timeout on the report server
set the report to never time out on the server
set the report timeout to several hours on the client call
Disabled antivirus on the client side, and verified there was no antivirus running on the reporting server.
Tried using default credentials in the ReportingService object as opposed to supplying credentials.
Any ideas would be appreciated. I understand the best solution is to split the report up into smaller reports, which is the backup option, but being able to keep it as one report is the goal.
I am getting the following error when running my package.
An OLE DB record is available. Source: "Microsoft OLE DB Provider for Visual FoxPro" Hresult: 0x80004005 Description: "File c:\docume~1\sortiz\locals~1\temp\0003589001o.tmp is too large.".
I use a SELECT statement to join two large FoxPro tables. The data contains over a million records.
Any help is greatly appreciated.
Thanks
I have a tab-delimited file that I'm trying to load into a database using SSIS. The database has a column called Comments that can hold up to 1000 Unicode characters (nvarchar(1000)).
I have appropriately defined the flat file connection and marked every field with the intended length, but every time I run it, it gives me the following error:
Data conversion failed. The data conversion for column "COMMENTS" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.
All the columns have a match, and this specific column has no value longer than 1000 characters; the largest record has 528 characters in it, and the fields are defined as Unicode string in the file connection.
I have already run out of ideas about what may be causing this error. Does anyone have an idea of what else to try?
Hi, I have created one stored procedure which handles global updates. I am using a cursor to fetch one row at a time for updating (it is required for implementing the business logic). Now when I execute this stored procedure, it gives me a deadlock error, and I don't know why I am getting it (approximately 150,000 rows). If I remove the unnecessary records from the table (approximately 50,000), it works fine. Is there any way to handle this? I am calling this stored procedure from my Windows service. Please give me a good solution if possible.
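For reference, one alternative I am considering is replacing the cursor with small set-based batches so each transaction stays short and deadlocks are less likely; a sketch only (assumes SQL Server 2005 or later for UPDATE TOP, and the table, columns, and business logic below are placeholders):

-- Sketch: process rows in batches of 10,000 instead of one row at a time.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.Accounts        -- placeholder table
    SET    Balance   = Balance * 1.05,     -- placeholder business logic
           Processed = 1
    WHERE  Processed = 0;

    IF @@ROWCOUNT = 0 BREAK;               -- nothing left to update
END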
Hi All,
I'm trying to transfer data from DB2 Database to SQL Server 2005.
Well, I used the OLE DB Source, the Data Conversion component, and the OLE DB Destination component.
I have five data flows with the configuration above, but I am receiving an error message from one of them.
Please check below the error message:
"[Source Table TARTRATE [1]] Error: The value was too large to fit in the output column "ADJ_RATE_PCT" (60). "
"[Source Table TARTRATE [1]] Error: The "component "Source Table TARTRATE" (1)" failed because error code 0xC02090F8 occurred, and the error row disposition on "output column "ADJ_RATE_PCT" (60)" specifies failure on error. An error occurred on the specified object of the specified component."
Could you please help me with this issue?
Thanks in advance.
Thiago
Hi,
I am trying to deploy a report model which is more than 4 MB in size. I am getting the following error:
Error 2 There was an exception running the extensions specified in the config file. ---> Maximum request length exceeded.
I changed the web.config on the server side as follows:
<!--
<httpRuntime
executionTimeout = "600" [in Seconds][number]
maxRequestLength = "102400" [number]
requestLengthDiskThreshold = "80" [number]
useFullyQualifiedRedirectUrl = "false" [true|false]
minFreeThreads = "8" [number]
minLocalRequestFreeThreads = "4" [number]
appRequestQueueLimit = "5000" [number]
enableKernelOutputCache = "true" [true|false]
enableVersionHeader = "true" [true|false]
apartmentThreading = "false" [true|false]
requireRootedSaveAsPath = "true" [true|false]
enable = "true" [true|false]
sendCacheControlHeader = "true" [true|false]
shutdownTimeout = "90" [in Seconds][number]
delayNotificationTimeout = "5" [in Seconds][number]
waitChangeNotification = "0" [number]
maxWaitChangeNotification = "0" [number]
enableHeaderChecking = "true" [true|false]
/>
-->
<httpRuntime executionTimeout="600" maxRequestLength="102400" requestLengthDiskThreshold="80" useFullyQualifiedRedirectUrl="false" minFreeThreads="8" minLocalRequestFreeThreads="4" appRequestQueueLimit="5000" enableKernelOutputCache="true" enableVersionHeader="true" requireRootedSaveAsPath="true" enable="true" shutdownTimeout="90" delayNotificationTimeout="5" waitChangeNotification="0" maxWaitChangeNotification="0" enableHeaderChecking="true" sendCacheControlHeader="true" apartmentThreading="false" />
But I still get the same error. What else do I have to change?
Please help!!
Thanks
Hi,
We are not able to export large data sets to PDF/Excel.
We are getting a request timeout error.
We are able to download large data sets in XML format.
Your inputs will be of great help.
Thanks & Regards,
Kiran Kirdat
I wrote an ActiveX service to delete records from several tables; there are more than 100,000 of them. When I tested it by deleting 1,000 records it was fine, but once the volume increased to 100,000, it gave me an error message saying the timeout operation failed.
How can I overcome this problem? Please help!
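For reference, the kind of batched delete I have seen recommended for this situation looks roughly like the sketch below (the table name and the cutoff condition are placeholders):

-- Sketch: delete in batches so each transaction stays short and does not
-- hit client/command timeouts or grow the log excessively.
SET ROWCOUNT 5000;
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.HistoryTable
    WHERE  CreatedDate < '20070101';   -- placeholder cutoff

    IF @@ROWCOUNT = 0 BREAK;
END
SET ROWCOUNT 0;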
While running query below on SQL Server 2005 (Build 3790: Service Pack 2):
SELECT DISTINCT pkg.PrimaryBarcode
FROM dbo.Package AS pkg (NOLOCK)
JOIN dbo.PackageCycle AS pc (NOLOCK)
ON pkg.PackageKey = pc.PackageKey
WHERE (pkg.BillCycleDateKey >= 20061201) OR
(pkg.BillStatusKey = 1)
I received a partial result set followed by
An error occurred while executing batch. Error message is: Couldn't replace text
I suspect this is a memory issue, but I cannot find any reference to this particular message on the Microsoft forums or other third-party forums. PrimaryBarcode is a varchar(50).
I am not sure where to go from here. I would appreciate any ideas. Thanks in advance.
Cheers,
Mike Byrd
Hello:
I have designed a CV database with the complete CV stored in a TEXT field. There is a keyword search which also queries the TEXT field. The query conditions are defined in T-SQL submitted through an ASP page. There are about 20,000 records now. While querying the database for the keyword search, I am receiving timeout errors. Is there any solution other than Index Server to rectify this situation? How can I speed up the query execution time? Please advise.
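For reference, the keyword search is essentially a wildcard LIKE query over the TEXT column, along these lines (simplified sketch; the table and column names are placeholders):

-- Simplified sketch of the current keyword search: a leading-wildcard LIKE
-- on a TEXT column forces a scan of every CV, which is why it slows down
-- as the table grows.
SELECT CandidateID, CandidateName
FROM   dbo.Candidates
WHERE  CVText LIKE '%project manager%';

The alternative I keep reading about is SQL Server's built-in full-text indexing (CONTAINS), which is a separate feature from Index Server.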
Rgds
pooja
I've got two databases on the same server and replicate some tables from one database to another. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. There are two tables on the subscriber that have some extra columns.
I get a "field size too large" error when trying to replicate them. Is there a workaround without having to make the publisher and subscriber tables identical in schema?
I have a large fact table, about 500 million rows, and I am using 2008 R2, so I am getting the file system error. I have browsed online and tried all the fixes, but I am still getting the error. I tried taking only about a year of data (which was still around 200 million records) and I was still getting the error.
Hi,
I'm attempting to use DTS to import data from a Memo field in MS Access (Jet 4.0 OLE DB Provider) into a SQL Server nvarchar(4000) field. Unfortunately, I'm getting the following error message:
Error at Source for Row number 30. Errors encountered so far in this task: 1.
Data for source column 2 ('Html') is too large for the specified buffer size.
I also get this error message when attempting to import the same data from Excel.
Per the MS Knowledgebase article located at http://support.microsoft.com/?kbid=281517, I changed the registry property indicated to 0. This modification did not help.
Per suggestions in other SQL Server forums, I moved the offending row from row number 30 to row number 1. This change only resulted in the same error message, but with the row number indicated as "Row number 1". (Incidentally, the data in this field is greater than 255 characters in every row, so the cause described in the Knowledge Base article doesn't seem to be my problem.)
You might also like to know that the data in the Access table was exported into this table from a SQL Server nvarchar(4000) field.
Does anybody know what might trigger this error message other than the data being less than 255 characters in the first eight rows (as described in the KB article)?
I've hit a brick wall, so I'd appreciate any insight. Thanks in advance!
What happens if you have a website and the hard drive on your server is, say, 250 GB, and then the database exceeds that and grows to 300 GB in size? How would you spread your database across two different hard drives? Thanks
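P.S. The usual approach I have seen suggested is to add a secondary data file on the second drive, so SQL Server spreads new allocations across both files; a sketch (the database name, logical file name, path, and sizes are placeholders):

-- Sketch: add a second data file on another drive; SQL Server proportionally
-- fills the files in a filegroup as the database grows.
ALTER DATABASE MyShopDB
ADD FILE
(
    NAME = MyShopDB_Data2,                        -- placeholder logical name
    FILENAME = 'E:\SQLData\MyShopDB_Data2.ndf',   -- placeholder path on the new drive
    SIZE = 10240MB,
    FILEGROWTH = 1024MB
);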
We currently have a data warehouse running on SQL 7.0, SP2. One of our primary fact tables now has well over 155 million rows in it. The table is not very wide, as it only contains 17 columns, most of which are defined as integers. The entire database is only 20 GB.
The issue is that the loads from the staging table to this fact table have significantly deteriorated over the last month or so, dropping from over 400 transactions per second to around 85. We drop all the indexes on the fact table before we load the data into it.
Are there issues with a manageable table size in SQL 7.0 that we need to be concerned about? And should we consider partitioning the table into several smaller tables and joining them with a "union all" view?
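For reference, the kind of local partitioned view I have in mind looks roughly like this (a sketch only; the table names, partitioning column, ranges, and columns are placeholders):

-- Sketch: one member table per month, each constrained by a CHECK on the
-- partitioning column, stitched back together with a UNION ALL view.
CREATE TABLE dbo.FactSales_200701
(
    DateKey    INT   NOT NULL CHECK (DateKey BETWEEN 20070101 AND 20070131),
    ProductKey INT   NOT NULL,
    Amount     MONEY NOT NULL,
    CONSTRAINT PK_FactSales_200701 PRIMARY KEY (DateKey, ProductKey)
);
CREATE TABLE dbo.FactSales_200702
(
    DateKey    INT   NOT NULL CHECK (DateKey BETWEEN 20070201 AND 20070228),
    ProductKey INT   NOT NULL,
    Amount     MONEY NOT NULL,
    CONSTRAINT PK_FactSales_200702 PRIMARY KEY (DateKey, ProductKey)
);
GO
CREATE VIEW dbo.FactSales
AS
SELECT DateKey, ProductKey, Amount FROM dbo.FactSales_200701
UNION ALL
SELECT DateKey, ProductKey, Amount FROM dbo.FactSales_200702;
GO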
I really need to get this performance issue resolved, as our IT support vendor is pushing us to port the data warehouse to UDB because they tell us that SQL server is not scalable enough to handle this volume of data.
Thanks for any help you can provide.
George M. Parker
Hi, anyone who administers a pretty big database (not less than 30 GB, with an average of 2 million or more rows per table), please share your experience with the maintenance of such a DB. Especially, I'm interested in:
1) Index maintenance (when and how: just regular DBCC, a maintenance plan, or some script to split the job in two, and so on).
2) Removing unused space from the DB (not major).
The server works 24x7, and it's a transactional environment: SQL 7 SP3 on a cluster.
I run the SP to rebuild all the indexes; it takes about 2-3 hours to determine the objects with fragmentation less than 80% and actually rebuild them, and during this process the users experience performance problems (especially for updates/inserts). It looks like I need to change the plan or strategy for doing this. Any thoughts appreciated!
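For reference, the maintenance is essentially the classic DBCC approach, along these lines (a sketch; the table and index names are placeholders, and the syntax is the SQL 7 style that takes an object ID):

-- Sketch: check fragmentation first, then rebuild only the indexes that need it.
DECLARE @id INT;
SET @id = OBJECT_ID('dbo.Orders');                          -- placeholder table
DBCC SHOWCONTIG (@id);                                      -- look at Scan Density / Logical Fragmentation
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_CustomerId', 90);  -- rebuild one index with fill factor 90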
Thanks in advance.
Dmitri
Hi,
How can I partition large tables so that the inserts and updates which I am doing on them take less time?
I want to know how I can partition large tables, and if I do that, how performance will be improved.
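If this is SQL Server 2005 or later (an assumption), the built-in table partitioning route looks roughly like this (a sketch; the names, date ranges, filegroup, and columns are placeholders):

-- Sketch (assumes SQL Server 2005+): range-partition a large table by date.
CREATE PARTITION FUNCTION pfByMonth (DATETIME)
AS RANGE RIGHT FOR VALUES ('20070101', '20070201', '20070301');

CREATE PARTITION SCHEME psByMonth
AS PARTITION pfByMonth ALL TO ([PRIMARY]);   -- placeholder: one filegroup for simplicity

CREATE TABLE dbo.BigTable
(
    EntryDate DATETIME NOT NULL,
    Amount    MONEY    NOT NULL
) ON psByMonth (EntryDate);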
Thanks.
Hi,
I have inherited some databases with extremely large log files.
I tried to truncate the transaction log, but it did not work.
Can somebody please tell me how to truncate these log files?
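For reference, the sequence I have seen suggested is to truncate the inactive part of the log and then shrink the file (a sketch; the database name, logical log file name, and target size are placeholders, and BACKUP LOG ... WITH TRUNCATE_ONLY applies to SQL Server 2000/2005):

-- Sketch: truncate the inactive part of the log, then shrink the log file.
-- Note: TRUNCATE_ONLY breaks the log backup chain; take a full backup afterwards.
USE InheritedDb;
BACKUP LOG InheritedDb WITH TRUNCATE_ONLY;
DBCC SHRINKFILE ('InheritedDb_Log', 500);   -- target size in MB (placeholder)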
Thanks in advance.
Attaullah
Hello,
I would like to know whether MS SQL Server 2000 can handle a large database of around 28 GB. Currently we are using Sybase, but we want to migrate to SQL Server 2000. Are there any performance issues?
Any help is appreciated.
Thanks
Sejal
How can I find the largest 5 or 10 tables in a database?
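One way I know of is to query the sysindexes system table (SQL 2000-style system tables); a sketch:

-- Sketch: reserved space per user table, largest first.
SELECT TOP 10
       o.name              AS TableName,
       SUM(i.reserved) * 8 AS ReservedKB     -- pages are 8 KB
FROM   sysindexes i
       JOIN sysobjects o ON o.id = i.id
WHERE  o.xtype = 'U'
  AND  i.indid IN (0, 1, 255)                -- heap/clustered index plus text/image
GROUP BY o.name
ORDER BY ReservedKB DESC;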
Thanks in advance
Chan
Hi,
I want some help with this particular scenario.
I have more than 350 databases, growing at an average of 5 databases per month.
I am concerned about the DR strategy to be adopted.
Can anyone help me?
OS - Window 2003 Standard
SQL - 2000 Standard SP4
2 CPU 4 GB Memory
We have a table that is approximately 155 GB with 450 million rows, 90 days worth of data, and normalized pretty tight. Obviously this beast is too much to handle (reporting, maintenance), so we are partitioning the data off into weekly tables. I will never need to store more than 90 days worth of data, so at the end of each week we will truncate the oldest week's table and change the constraint on it to handle the next week. Anyway, we have done this and it is working well in two testing environments with a lot less data.
I need to migrate the data from the one table of 450 million rows to the 13 weekly tables that will approximately make up the 90 days worth of data. I hoped to BCP out the data and Bulk Insert it (fast mode). Even one week (approximately 35 million rows) takes forever, and I don't even see it start to output. I have changed the batch size to be larger. Is there a better/faster way to do this, or am I just going to have to wait? I'm open to any suggestions.
Note: Hardware is all RAID 1+0, reading from one array and writing to a different array. Performance just seems too slow, but I'm not sure what to expect from something this big.
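For reference, the load side currently looks roughly like this (a sketch; the file path, table name, batch size, and sort column are placeholders):

-- Sketch: load one week's BCP file into its weekly table with minimal-logging
-- friendly hints (table lock, ordered input matching the clustered index).
BULK INSERT dbo.Fact_Week01
FROM 'E:\bcp_out\fact_week01.dat'
WITH
(
    DATAFILETYPE = 'native',         -- matches a bcp -n export
    TABLOCK,                         -- allows a minimally logged load
    BATCHSIZE    = 100000,
    ORDER (TransactionDateKey ASC)   -- placeholder clustered key column
);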
Ideas.
We have many installations of our shopping cart database. One specific database is huge, now about 25 GB, compared to the others that range from 20 to 75 MB. The server this one resides on has three other instances of the same database that are normal size.
In a particular table in the large database there are 9,700 rows taking 380 MB; the same table in a normal DB has 162,000 rows and takes 6 MB. The tables are identical and the indexes are the same.
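For reference, this is how I have been comparing the space usage (the table name is a placeholder); running it with @updateusage corrects stale size information in the system tables:

-- Sketch: report (and correct) the space used by the suspect table.
EXEC sp_spaceused 'dbo.OrderItems', @updateusage = 'TRUE';
-- or correct the catalog counts for the whole database:
DBCC UPDATEUSAGE (0);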
Any ideas out there?
I have this large DB, around 250 GB, in which 70% of the space is occupied by one single table. This is not a high-transaction DB; it has only a few users.
Backup is taking 2:30 hours!
Is there a way I can reduce this time?
In desperation, I am thinking of doing the following:
1. detach the DB (off peak hours)
2. copy .mdf and .ldf files (backup)
3. attach the DB
Any suggestions?
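One alternative I am also considering before the detach/copy route is striping the backup across several files on different drives, which can cut the elapsed time (a sketch; the database name and paths are placeholders):

-- Sketch: stripe the full backup across multiple files/drives so the
-- backup writes in parallel.
BACKUP DATABASE BigDb
TO  DISK = 'F:\Backups\BigDb_1.bak',
    DISK = 'G:\Backups\BigDb_2.bak',
    DISK = 'H:\Backups\BigDb_3.bak'
WITH INIT, STATS = 10;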
------------------------
I think, therefore I am - Rene Descartes
I often come across the following scenario in my job:
A client rings up and says they have run out of server space because the SQL 2000 transaction log has consumed all of the space, or a very large portion of it.
What is the correct procedure for resolving this ASAP when working with full recovery mode SQL 2000 databases? I ask because other guys within my company all do different things, with good results and sometimes bad results.
The procedure I use is the following:
METHOD 1
1. Back up both the database and the transaction log.
2. Right-click the database and select Detach, which from my understanding is a clean detach method which ensures that uncommitted transactions are committed to the database.
3. Rename the old transaction log to .ldfOLD
4. Reattach the database, which creates a new transaction log.
METHOD 2
1. I don't use this method, but I'm pretty sure it's risky; if possible, can someone provide me with the reasons why:
Change the database recovery model from full to simple, shrink the logs, and then change back to full.
Basically, what I am asking is: what is the fastest and most effective way to sort out the above issue, with the ability to roll back successfully?
Please don't comment on why the transaction log is so big, as I don't want to look into that now; all I am asking is what is the most effective method to shrink the log down and save space.
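For comparison, the sequence I normally reach for keeps the log backup chain intact, which preserves the ability to restore and roll forward (a sketch; the database name, logical log file name, backup path, and target size are placeholders):

-- Sketch: back up the log to disk (keeps the backup chain), then shrink the
-- physical log file back to a sensible size.
USE ClientDb;
BACKUP LOG ClientDb TO DISK = 'F:\Backups\ClientDb_log.trn';
DBCC SHRINKFILE ('ClientDb_Log', 1024);   -- target size in MB (placeholder)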
I am developing an application that has a table with lots of records (network traffic), but the data is summarized every so often to create summary records (old records are deleted). The problem is that I have a PK based on an auto-increment ID (int) that will run out of numbers. However, this ID is not referenced anywhere (it is not a foreign key from another table, it is not used for deletion, and there are no updates on this table whatsoever).
So my possibilities are:
1.- reseed the id when it is about to run out.
2.- make the id bigint
3.- remove the id and change the PK to 2 other fields
4.- remove the id and without PK
I am leaning toward option 4, because I do not see the need for a PK, but I understand that it is quite out of the norm, so I would like to hear from other people (I do not have much experience with databases).
I also like option 3. I already have an index on one of the other fields (time).
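For reference, options 1 and 2 would look roughly like this (a sketch; the table, column, and constraint names are placeholders):

-- Option 1 sketch: reseed the identity once the old rows are summarized away.
DBCC CHECKIDENT ('dbo.NetworkTraffic', RESEED, 0);

-- Option 2 sketch: widen the column to bigint; the PK has to be dropped and
-- recreated around the ALTER because the column participates in it.
ALTER TABLE dbo.NetworkTraffic DROP CONSTRAINT PK_NetworkTraffic;
ALTER TABLE dbo.NetworkTraffic ALTER COLUMN Id BIGINT NOT NULL;
ALTER TABLE dbo.NetworkTraffic ADD CONSTRAINT PK_NetworkTraffic PRIMARY KEY (Id);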
Any input will be appreciated.
Claudio Robles
Hi,
I have a table which has over 450,000 records in it. I have now split this into four tables so each has around 100,000 records, but I'm still having the problem of the data being returned really slowly.
What I need to do with this data is group it by a code and show the total for each code for every month of the year (this is currently based on one column, selecting the data accordingly). I have created views and put some indexes on the table, but the results are still being returned slowly. Does anyone have any suggestions on how I can speed this up?
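For reference, the monthly-total query is essentially this shape, and the covering index is the kind of thing I have been trying (a simplified sketch; the table and column names are placeholders, and INCLUDE requires SQL Server 2005 or later):

-- Simplified sketch of the monthly total per code.
SELECT Code,
       DATEPART(year,  EntryDate) AS [Year],
       DATEPART(month, EntryDate) AS [Month],
       SUM(Amount)                AS Total
FROM   dbo.Transactions
GROUP BY Code, DATEPART(year, EntryDate), DATEPART(month, EntryDate);

-- Covering index sketch so the aggregate can be answered from the index alone.
CREATE INDEX IX_Transactions_Code_Date
    ON dbo.Transactions (Code, EntryDate) INCLUDE (Amount);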
Thanks
Gemma
I'm trying to export data from a SQL table using classic ASP. In the past, I have created tab-delimited text files using code similar to the following:
Response.AddHeader "Content-Disposition", "attachment;filename=results.txt"
Response.ContentType = "application/octet-stream"
Response.Write "Field1"&vbTAB
Response.Write "Field2"&vbTAB
Response.Write "Field3"&vbTAB
Response.Write vbCRLF
RSb.Open "SELECT * FROM tblData1 WHERE ID=1 ORDER BY txtField1", con, 3, 3
Do Until RSb.EOF
Response.Write RSb("txtField1")&vbTAB
Response.Write RSb("txtField2")&vbTAB
Response.Write RSb("txtField3")&vbTAB
Response.Write vbCRLF
RSb.MoveNext
Loop
RSb.Close
This always worked fine in the past because the tables were small. Now the exports contain around 100,000 records and I'm getting errors. I'm setting the page timeout and the SQL connection timeout, but I'm still getting errors. I'm not exactly sure what's happening, but the export stops after about 5 MB. It varies where it stops, so I'm thinking it's timing out somewhere.
Is there a better way to do this? Possibly use an export function in MS SQL that would export faster?
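The export function I know of on the SQL side is bcp, which can be driven from T-SQL via xp_cmdshell if that is enabled (a sketch; the server name, database and table names, output path, and query are placeholders):

-- Sketch: let bcp write the tab-delimited file directly on the server,
-- instead of streaming every row through the ASP page.
EXEC master..xp_cmdshell
    'bcp "SELECT txtField1, txtField2, txtField3 FROM MyDb.dbo.tblData1 WHERE ID = 1 ORDER BY txtField1" queryout "D:\Exports\results.txt" -c -T -SMYSERVER';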
TIA
- PR
If I use BCP to export a very large table, will that table be blocked for writes during the export process? I don't want to prevent users from accessing that table during the bcp process.
Thank You, TFD.
View 1 Replies View RelatedDear Experts,What is the best way to do a large insert WITHOUT having direct accessto the machine SQL Server is running on? For example, imagine I wantto insert something like 20,000 records. If I were to have access tothe server, I could BULK INSERT into a temp table and then insert intothe destination table. But if I can't create a file on the server touse for BULK INSERT, what is the next best alternative to doing lotsof 1 record insert statements?Thanks,-Emin
Hi All,
My question is: what are the best practices for administering large DBs? (My coworker is the DB administrator. I'm more of the developer, but I'm slowly being sucked in.) My main concern is that we have some DBs that take approximately 3 hours a night just to rebuild the indexes. I know that with MSSQL 2000 I can use partitioned views to break out the table(s) into smaller databases and tables. But we also have an older server that runs MSSQL 7. Lastly, how do you handle drive space issues? Do you spread out the DB across multiple MDF files on different drives? Thanks in advance.
Hi,
I have one column which contains some large data, more than 30,000 characters. When I exported the data to an Excel file, some data in this column showed as #######. These # characters are causing a problem in the SSIS package. If I delete that data (####), then the SSIS package works fine.
How do I solve this problem?
Thanks in advance.