Putting Huge Chunk Of Data Into Database? Workable?
Jun 14, 2006
Hi.
I am trying to put a huge chunk of text into my database, for example information for a particular product which has more than 2000 characters. I saw the datatype "nvarchar(MAX)" in SQL Server 2005 and was wondering if I can use this to store my text.
Thanks
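A minimal sketch, assuming a hypothetical Products table; in SQL Server 2005, nvarchar(MAX) can hold up to about 2 GB of data (roughly one billion characters), so a few thousand characters is not a problem:

-- nvarchar(MAX) replaces the old ntext type for long text in SQL Server 2005
CREATE TABLE dbo.Products (
    ProductId   int IDENTITY(1,1) PRIMARY KEY,
    Name        nvarchar(200) NOT NULL,
    Description nvarchar(MAX) NULL   -- long product information goes here
);

INSERT INTO dbo.Products (Name, Description)
VALUES (N'Widget', N'...more than 2000 characters of product text...');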
View 1 Replies
Dec 16, 2007
Hi,
I have to transform about 60 million rows of data, and it runs so slowly that it never finishes in my testing. Should I process it chunk by chunk, or are there other techniques I can use (I am using a data flow task)? Thanks for the advice.
View 12 Replies
View Related
Nov 2, 2005
Here is my dilemma. I have a 120 GB database in which I need to mask customer credit card numbers. The field is a varchar(16). I need to update the field so that we only store the first 4 digits and the last 4 digits of the credit card and insert * to fill in the rest of the credit card number. I was going to do this as a loop using the following code:

While Exists (Select Top 10 * From Header Where IsNumeric(CCNbr) = 1)
Begin
    Begin Transaction T1
    Update Header
    Set Header.CCNbr = Left(D1.CCNbr, 4) + '********' + Right(D1.CCNbr, 4)
    From (Select Top 10 * From Header Where IsNumeric(CCNbr) = 1) as D1
    Commit Transaction T1
    If Not Exists (Select Top 10 * From Header Where IsNumeric(CCNbr) = 1)
        Break
    Else
        Continue
End

In theory this only selects the top 10 rows, updates them, dumps the log and moves on to the next 10 until all the rows are updated. I tried running this on my test database and it fills up the transaction log. Can anyone tell me the best way to go about doing what I need? Thanks
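A hedged sketch of one alternative that keeps each transaction small, assuming SQL Server 2000-era syntax (SET ROWCOUNT), the Header table and CCNbr column from the post, and a hypothetical database name MyDb; in production the TRUNCATE_ONLY step would normally be a real log backup:

SET ROWCOUNT 10000   -- limit each UPDATE to one manageable batch
WHILE 1 = 1
BEGIN
    UPDATE Header
    SET CCNbr = LEFT(CCNbr, 4) + '********' + RIGHT(CCNbr, 4)
    WHERE IsNumeric(CCNbr) = 1   -- already-masked values contain '*' and are skipped

    IF @@ROWCOUNT = 0 BREAK

    BACKUP LOG MyDb WITH TRUNCATE_ONLY   -- keep the log from growing between batches
END
SET ROWCOUNT 0

Because each batch commits on its own, the log only ever has to hold one batch of changes at a time.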
View 1 Replies
View Related
Jun 1, 2007
Following an upgrade to SSRS 2005, Reporting Services worked EXCEPT from within applications or from scheduled jobs. Running reports from application-generated URLs produced the following error: 'An internal error occurred on the report server. See the error log for more details'. These same reports, however, ran perfectly from within SSRS. After running them once from Reporting Services, they subsequently run without problem when called by applications or jobs.
Examples of these errors include the following (stack traces available if needed):
ReportingServicesService!runningjobs!13!5/27/2007-01:57:23:: i INFO: Adding: 1 running jobs to the database
ReportingServicesService!chunks!f!05/27/2007-01:58:34:: e ERROR: LockSnapshotForUpgrade: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
ReportingServicesService!chunks!1a!05/27/2007-01:58:34:: e ERROR: GetChunkPointerAndLength: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
ReportingServicesService!chunks!11!05/27/2007-01:58:34:: e ERROR: ### SnapshotConverter(00d68151-85e5-4669-a0de-28ed81bd091c, True), System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
Log entries below correspond to the following attempts to run a specific report.
ENF_Comprehensive_Report 5/31/2007 9:43:37 AM rsInternalError
(one of many user attempts to run from app after upgrade)
ENF_Comprehensive_Report 5/31/2007 10:50:57 AM rsSuccess
(first successful run after upgrade; run from within SSRS)
ENF_Comprehensive_Report 5/31/2007 11:09:59 AM rsSuccess
(first successful run by user from application)
It appears that after the upgrade Reporting Services was attempting to use non-existent or invalid chunks and snapshots to satisfy application or job originated requests. When first called from within SSRS, the old chunk was accessed but then appears to have been ignored, with subsequent calls running without a chunk. Our workaround for the problem was to manually run each of our 200+ reports from within SSRS to 'initialize' them for applications and jobs.
Typical Application Request Failure for report:
w3wp!library!a!5/31/2007-09:43:22:: i INFO: Cleaned 0 batch records, 0 policies, 4 sessions, 0 cache entries, 5 snapshots, 46 chunks, 0 running jobs, 0 persisted streams
w3wp!library!a!05/31/2007-09:43:37:: i INFO: Call to RenderFirst( '/Enforcement/Historical/ENF_Comprehensive_Report' )
w3wp!chunks!a!05/31/2007-09:43:37:: i INFO: Returning old chunk for: (24788648-41c5-43d8-ba6b-409662211a37, 'CompiledDefinition', 0)
w3wp!runningjobs!6!5/31/2007-09:44:29:: i INFO: Adding: 1 running jobs to the database
w3wp!chunks!a!05/31/2007-09:45:37:: e ERROR: ### SnapshotConverter(24788648-41c5-43d8-ba6b-409662211a37, True), System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
<-- stack trace entries here -->
w3wp!library!a!05/31/2007-09:45:37:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException: An internal error occurred on the report server. See the error log for more details., ;
Info: Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException: An internal error occurred on the report server. See the error log for more details. ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
<-- stack trace entries here -->
--- End of inner exception stack trace ---
w3wp!webserver!a!05/31/2007-09:45:37:: e ERROR: Reporting Services error Microsoft.ReportingServices.Diagnostics.Utilities.RSException: An internal error occurred on the report server. See the error log for more details. ---> Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException: An internal error occurred on the report server. See the error log for more details. ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
First run of report from within Reporting Services following upgrade:
w3wp!library!19!05/31/2007-10:50:30:: Call to GetPermissionsAction(/Enforcement/Historical/ENF_Comprehensive_Report).
w3wp!library!19!05/31/2007-10:50:30:: Call to GetSystemPropertiesAction().
w3wp!library!19!05/31/2007-10:50:30:: Call to GetPropertiesAction(/Enforcement/Historical/ENF_Comprehensive_Report, PathBased).
w3wp!chunks!19!05/31/2007-10:50:30:: i INFO: Returning old chunk for: (24788648-41c5-43d8-ba6b-409662211a37, 'CompiledDefinition', 0)
w3wp!library!1!05/31/2007-10:50:31:: Call to GetSystemPermissionsAction().
w3wp!library!1!05/31/2007-10:50:31:: Call to GetPropertiesAction(/Enforcement/Historical/ENF_Comprehensive_Report, PathBased).
w3wp!library!e!05/31/2007-10:50:31:: Call to GetSystemPropertiesAction().
w3wp!library!1!05/31/2007-10:50:56:: Call to GetPermissionsAction(/Enforcement/Historical/ENF_Comprehensive_Report).
w3wp!library!6!05/31/2007-10:50:56:: Call to GetSystemPropertiesAction().
w3wp!library!6!05/31/2007-10:50:56:: Call to GetPropertiesAction(/Enforcement/Historical/ENF_Comprehensive_Report, PathBased).
w3wp!library!6!05/31/2007-10:50:56:: Call to GetSystemPermissionsAction().
w3wp!library!6!05/31/2007-10:50:56:: Call to GetPropertiesAction(/Enforcement/Historical/ENF_Comprehensive_Report, PathBased).
w3wp!library!6!05/31/2007-10:50:56:: Call to GetSystemPropertiesAction().
w3wp!library!1!05/31/2007-10:50:57:: i INFO: Call to RenderFirst( '/Enforcement/Historical/ENF_Comprehensive_Report' )
w3wp!webserver!1!05/31/2007-10:50:58:: i INFO: Processed report.
Subsequent application request for same report:
w3wp!library!11!05/31/2007-11:09:59:: i INFO: Call to RenderFirst( '/Enforcement/Historical/ENF_Comprehensive_Report' )
w3wp!webserver!11!05/31/2007-11:10:00:: i INFO: Processed report. Report='/Enforcement/Historical/ENF_Comprehensive_Report', Stream=''
Environment: 32-bit W2K3 Enterprise R2 SP2 with a single instance of SQL 2000 SP3 hosting the SSRS 2005 SP2 metadata; IIS running in 6.0 isolation mode; separate dedicated IIS server; dedicated domain user accounts for service accounts and application pools. Over 200 reports deployed, half of which are run by users from within applications.
SSRS upgrade process: back up existing databases, settings, keys; uninstall SSRS 2000; perform a 'files only' install of SSRS 2005 to the existing SQL 2000 instance; manually configure SSRS 2005 and IIS; upgrade the Reporting Services databases.
This is a follow-up to a previous posting to this forum, 30-May 6:28 pm UTC: Problems following SSRS database upgrade.
View 2 Replies
View Related
Jul 7, 2015
We have a daily process which copies millions of rows of data from one DB to another over a linked server. Just checking on best practice: are there more efficient ways than a linked server to copy millions of rows of data from one DB to another? I checked bulk insert, but that transfers data from a file to a DB, not DB to DB.
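A hedged sketch of one common pattern: copy one key range at a time with a set-based INSERT ... SELECT so each transaction (and the log) stays small; the linked server, database, table, and column names below are illustrative only:

DECLARE @FirstId int = 1, @LastId int = 1000000;   -- one key range per pass

-- copy one range across the linked server in a single set-based statement
INSERT INTO dbo.BigTable WITH (TABLOCK) (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM LINKEDSRV.SourceDb.dbo.BigTable
WHERE Id BETWEEN @FirstId AND @LastId;

For very large volumes, SSIS with a fast-load OLE DB destination, or bcp out followed by BULK INSERT via a file, usually outperforms a plain linked-server query.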
View 6 Replies
View Related
Feb 20, 2007
Hi all, I'm being driven to distraction by my inability to find a workable solution here. I'll try to lay out my problem as simply as possible:
My application analyses sentence structure, it stores hundreds of thousands of web pages, and stores all the words on those pages sequentially in a database with a primary id, the id of the page the word is from, and the word itself of course.
ie : id , parent_id , word
What I'm trying to do (which already works successfully in both SQL and MySQL) is to retrieve the word immediately before the one I specify and 3 words after the word I specify. Ideally in one union query, but I'll accept it if that's too complex and go for two for now.
Because of the sheer size of the database ( millions of keywords ) I can't afford to retrieve every single record and feed off the top results in the application itself.
I've attempted to use subqueries:
SELECT yek FROM child_barrels A WHERE 3 > (SELECT COUNT(*) FROM child_barrels B WHERE A.yek < B.yek) AND id>'" + id + "' AND parent_id='" + parent_id[x] + "' ORDER BY id ASC
SELECT yek FROM child_barrels A WHERE 1 < (SELECT COUNT(*) FROM child_barrels B WHERE A.yek < B.yek) AND id>'" + id + "' AND parent_id='" + parent_id[x] + "' ORDER BY id ASC
These raise an error on the first SELECT statement in the subquery.
(SELECT yek FROM child_barrels WHERE id>'" + id + "' AND parent_id='" + parent_id[x] + "' ORDER BY id ASC LIMIT 3) UNION (SELECT yek FROM child_barrels WHERE id<'" + id + "' AND parent_id='" + parent_id[x] + "' ORDER BY id DESC LIMIT 1)
Above directly is the original MySQL code.
I would appreciate any help available on this, thanks in advance.
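A hedged T-SQL translation of the MySQL query above, using the child_barrels columns from the post; SQL Server has no LIMIT, so each half uses TOP inside a derived table with its own ORDER BY (the @id and @parent_id values are placeholders for what the application supplies):

DECLARE @id int, @parent_id int
SET @id = 12345          -- sample values only
SET @parent_id = 678

SELECT yek FROM (
    SELECT TOP 1 yek, id
    FROM child_barrels
    WHERE id < @id AND parent_id = @parent_id
    ORDER BY id DESC                 -- the word immediately before
) AS prev
UNION ALL
SELECT yek FROM (
    SELECT TOP 3 yek, id
    FROM child_barrels
    WHERE id > @id AND parent_id = @parent_id
    ORDER BY id ASC                  -- the three words after
) AS nxt;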
View 4 Replies
View Related
Jun 7, 2006
I need to periodically import a (HUGE) table of data from an external data source (not SQL Server) into SQL Server, with the following scenarios:
1. Some of the records in the external data source may not exist in SQL Server.
2. Some of the records in the external data source may have a different value at different imports, but these records are identified uniquely by the same primary key in the external data source and in SQL Server.
3. Some of the records in the external data source may be the same as in SQL Server.
Due to the massive volume of the import, I would like to import only the records which are different from what I have in SQL Server (cases 1 and 2 above). In fact case 2 is the most critical.
I thought of making a query with a left outer join between the data in the external data source table (SOURCE) and the data in the SQL Server table (DESTIN). The join is done on the respective primary keys (composed keys of up to 10 columns) and one of the WHERE conditions will be that the value in SOURCE is different from the value in DESTIN.
The result of this query would be exactly what I need to import.
How to do this in SSIS??? I couldn't figure out how to join tables in different data sources yet.
In fact I cannot write a stored procedure to do that, since one of the sources is a data source other than SQL Server.
I have seen the Lookup transformation in this article http://www.sqlis.com/default.aspx?311 but this is not exactly what I want to do.
Another possibility is to use the Merge Join, but due to the sorting I believe its performance would be terrible!
Thanks in advance for your suggestions!
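A hedged alternative to joining across data sources inside SSIS: land the external extract into a staging table first (a plain fast-load destination), then let the database find the differences; the table and column names below are illustrative:

-- rows that are new (case 1) or whose value has changed (case 2)
SELECT s.Key1, s.Key2, s.Value
FROM dbo.StagingSource AS s
LEFT JOIN dbo.Destin AS d
    ON  d.Key1 = s.Key1
    AND d.Key2 = s.Key2
WHERE d.Key1 IS NULL          -- case 1: not in DESTIN yet
   OR d.Value <> s.Value;     -- case 2: present but different

This result set can then feed a data flow (or a plain INSERT/UPDATE) for the actual import.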
View 9 Replies
View Related
May 14, 2007
if I have the following code:
using Microsoft.SqlServer.Dts.Runtime;
using Microsoft.SqlServer.Dts.Tasks.SendMailTask;
class TestSendMailTask
{
public static void Main()
{
Package pkg = new Package();
ConnectionManager smtpCM;
smtpCM = pkg.Connections.Add("SMTP");
smtpCM.Name = "SMTP Connection Manager";
smtpCM.ConnectionString = "smtphost";
Executable exe = pkg.Executables.Add("STOCK:SendMailTask");  // "STOCK:SendMailTask" is the moniker for the Send Mail task
TaskHost thSendMailTask = (TaskHost)exe;
{
thSendMailTask.Properties["SmtpConnection"].SetValue(thSendMailTask, "SMTP Connection Manager");
thSendMailTask.Properties["ToLine"].SetValue(thSendMailTask, "someone1@example.com");
thSendMailTask.Properties["CCLine"].SetValue(thSendMailTask, "someone2@example.com");
thSendMailTask.Properties["BCCLine"].SetValue(thSendMailTask, "someone3@example.com");
thSendMailTask.Properties["FromLine"].SetValue(thSendMailTask, "someone4@example.com");
thSendMailTask.Properties["Priority"].SetValue(thSendMailTask, MailPriority.Normal);
thSendMailTask.Properties["FileAttachments"].SetValue(thSendMailTask, "C:\test_image.jpg");
thSendMailTask.Properties["Subject"].SetValue(thSendMailTask, "Testing the SendMail Task");
thSendMailTask.Properties["MessageSourceType"].SetValue(thSendMailTask, SendMailMessageSourceType.DirectInput);
thSendMailTask.Properties["MessageSource"].SetValue(thSendMailTask, "This is only a test.");
}
DTSExecResult valResults = pkg.Validate(pkg.Connections, pkg.Variables, null, null);
if (valResults == DTSExecResult.Success)
{
pkg.Execute();
}
}
}
-------
How do I make this a workable package so that it compiles, with javadoc-style comments and instructions, so that other people can use it?
View 20 Replies
View Related
May 20, 2006
The log file for my database in SQL Server 2005 is huge. How do I empty it, or in effect shrink it or start it over? Thanks
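A hedged sketch of the usual sequence, assuming the database is in the full recovery model and that MyDb / MyDb_log stand in for the real database and logical log file names:

BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';   -- truncates the inactive part of the log

USE MyDb;
DBCC SHRINKFILE (MyDb_log, 512);   -- shrink the physical file; target size in MB

If point-in-time recovery is not needed, switching the database to the SIMPLE recovery model stops the log from growing this way again.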
View 1 Replies
View Related
May 21, 2007
I have an 80 gig server. My client has sent me a backup of their db. The actual data file is 47 gigs. The client didn't truncate the transaction log, and on the backup , it is 37 gigs.
Question: when restoring this to my dev box, do I need to restore the transaction log, or is there a way to truncate it so the 37 GB isn't transferred to the server?
Thanks
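A hedged sketch: a full backup is always restored together with its log file, but the file can be shrunk immediately afterwards; the file names below (as reported by RESTORE FILELISTONLY) are illustrative:

RESTORE FILELISTONLY FROM DISK = 'D:\Backups\ClientDb.bak';   -- shows the logical file names

RESTORE DATABASE ClientDb FROM DISK = 'D:\Backups\ClientDb.bak'
WITH MOVE 'ClientDb_Data' TO 'E:\Data\ClientDb.mdf',
     MOVE 'ClientDb_Log'  TO 'E:\Data\ClientDb.ldf';

ALTER DATABASE ClientDb SET RECOVERY SIMPLE;   -- fine for a dev box
USE ClientDb;
DBCC SHRINKFILE ('ClientDb_Log', 512);          -- reclaim most of the 37 GB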
View 11 Replies
View Related
Apr 3, 2000
SQL 7 SP1 NT4 SP5
I have a TRANSACTION table with 150 million rows.
I have a USER table.
Each user has about 600 records in the TRANSACTION table.
The TRANSACTION clustered index is on USERID + RECID. The second index is on USERID + Fieldx + Fieldy.
The TRANSACTION table gets about 1.4 million inserts in a normal day and about 40,000 updates.
I want to go through the USER table and delete all users who have not visited me in a while.
I want to do this without substantially hindering performance in a production environment. I can perform this over a week period or two if needed.
The best way I thought of doing this was to grab a batch of x users in a cursor and loop through, deleting their corresponding TRANSACTION records.
Does anyone have any ideas on a better way? What is going to happen to my indexes during this time?
Thanks !!!
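A hedged sketch of that cursor-driven approach, assuming a hypothetical LastVisitDate column on the USER table and a two-year cutoff; each user's ~600 TRANSACTION rows are removed in their own small transaction, so locks and log use stay modest:

DECLARE @UserId int

DECLARE stale_users CURSOR FOR
    SELECT UserID FROM [USER]
    WHERE LastVisitDate < DATEADD(year, -2, GETDATE())   -- cutoff is illustrative

OPEN stale_users
FETCH NEXT FROM stale_users INTO @UserId
WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM [TRANSACTION] WHERE UserID = @UserId   -- seeks via the clustered index on USERID + RECID
    DELETE FROM [USER] WHERE UserID = @UserId
    FETCH NEXT FROM stale_users INTO @UserId
END
CLOSE stale_users
DEALLOCATE stale_users

Running this in off-peak windows over a week or two, and capping the number of users per run, keeps the impact on the 1.4 million daily inserts low; both indexes are maintained row by row rather than rebuilt.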
View 3 Replies
View Related
Jun 12, 2006
Hi,
I have 4 tables, each consisting of approximately 10,000,000 rows. They have the same columns (fTime [datetime] and bid [money]). What I want to do is collect all of the data into one of the tables, in ascending order by fTime.
PS: I want to do it as fast as possible as well.
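A hedged sketch, assuming the four tables are named t1 through t4 and that t1 is the one to consolidate into; a clustered index on fTime is what actually keeps the combined data in ascending order:

-- t1 already holds its own rows; append the rest in one set-based pass
INSERT INTO t1 (fTime, bid)
SELECT fTime, bid FROM t2
UNION ALL
SELECT fTime, bid FROM t3
UNION ALL
SELECT fTime, bid FROM t4;

The INSERT itself does not need an ORDER BY; if ordered reads are the goal, the clustered index on fTime on the target table is enough.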
View 1 Replies
View Related
Nov 20, 2006
Hi, I've an application, let's call it simply "A", which creates two huge tables in a Microsoft SQL database. Let's call them "table1" and "table2". It saves a great deal of data into these tables. After application "A" has finished, another application is executed which deletes these two tables. Then application "A" is started again and it creates the two tables again, but the amount of data becomes bigger. It can only proceed if the tables were deleted completely before and the database is empty. This is the procedure which I repeat very often, but every time the amount of data becomes bigger (table1 and table2 become bigger). A couple of times it works fine, but at some point the data becomes too big and application "A" fails, most likely because the data wasn't removed correctly / completely. This is my code for deleting the two tables; maybe there is something I have to change:

try
{
    SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder("Server=mycomputerdbname;Integrated Security=SSPI;" + "Initial Catalog=testing");
    builder["Server"] = "(local)dbname";
    builder["Connect Timeout"] = 10;
    builder["Trusted_Connection"] = true;
    builder["Initial Catalog"] = ((ComponentConfiguration)this.componentConfig).Persistency.DatabaseName;
    SqlConnection sqlconnection = new SqlConnection();
    sqlconnection.ConnectionString = builder.ConnectionString;
    sqlconnection.Open();
    SqlCommand cmd1 = new SqlCommand("DROP TABLE table1"); // TO DO: delete all tables
    SqlCommand cmd2 = new SqlCommand("DROP TABLE table2"); // TO DO: delete all tables
    cmd1.Connection = sqlconnection;
    cmd2.Connection = sqlconnection;
    cmd1.ExecuteNonQuery();
    Thread.Sleep(7000);
    cmd2.ExecuteNonQuery();
    Thread.Sleep(7000);
    sqlconnection.Close();
    Thread.Sleep(3000);
}
catch
{
}

Thanks for help! mulata
View 6 Replies
View Related
May 31, 2007
Hi Good morning to all,
My day started with loading a huge volume of data, and my data flow task failed to do so.
My data flow has a flat file connected to an OLE DB target. This is a one-to-one mapping. My source file contains 50 lakh (5 million) records and is 500 MB in size.
I'm processing the data with all the default buffer settings. I have 4 CPUs in my server.
The system process DTSDebug.exe is using more than 2 GB of memory (page file). My average CPU usage is about 70%, with one of the CPUs hitting 100% utilization.
I'm very new to SSIS. So please give me some info on how to set my buffers, and is there any PDF on performance tuning in SSIS?
Is there any bulk load transformation in SSIS to load into DB2 UDB?
If so, how do I get it installed?
Thanks in advance,
Suresh N
View 2 Replies
View Related
Mar 13, 2007
Actually, in my transformation I am transferring a huge amount of data.
I have been using the OLE DB Command at the end to dump my incoming data into the respective tables.
For example:
Say you have two tables,
table 1 and table 2.
In my incoming data I have a lookup that checks two unique columns against the unique columns in table 1. If the record does not exist, I try inserting a record into table 2, get the unique field of that record, and store it in a particular column of table 1.
The data is very large; is this the better way, or if you have any suggestions do let me know.
View 5 Replies
View Related
Mar 20, 2007
Hello All,
I am using SSIS to transfer data between two SQL Servers (2000). There is no transformation involved, as the source and destination table structures are the same. Even then the package execution takes a lot of time.
The data in the tables is of the order of 66,000,000 rows, and we were required to kill the package execution after it took more than 24 hours. The CPU usage was more than 13,000 s and disk I/O was well above 330,000,000. I am new to the finer points of SSIS. Can anyone please tell me why the package has become so resource hungry?
Thanks in advance,
Atul
View 3 Replies
View Related
Aug 19, 2007
Hi All!
I have an Issue.
I am calling an RDL file through the URL and I am passing Format=Excel in the URL.
Eg. http://harinarayana/ReportServer/......&Format=Excel&...........
If the data is more than around 20,000 records, it is not able to export, and
an error like "The service is not available" is displayed.
Does anyone have any solution for scenarios like this? It would be of great help to me.
Regards
Hari
View 1 Replies
View Related
Mar 20, 2008
I am creating a database for an application through script. After the tables, views, and sp's are created, the database is populated with data. After all of this (and before the application is even run), the log file is about 700MB. If I shrink the database, it takes the log down to 1MB. The mdf file is about 165 MB before and after it has been shrunk.
I have two questions:
1. Is there something I should look for in my database scripts, or is there a setting that could prevent the log from being created so large?
2. Is there a script I can run in my SQL code after the database has been created and populated to shrink it?
Thank you for your help
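A hedged sketch of both ideas, assuming MyAppDb / MyAppDb_log stand in for the real database and logical log file names:

-- before running the creation/population scripts: keep logging minimal
ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;

-- ... create tables, views, procs and load the data here ...

-- afterwards: reclaim the log space and, if needed, restore full logging
USE MyAppDb;
DBCC SHRINKFILE (MyAppDb_log, 10);   -- target size in MB
ALTER DATABASE MyAppDb SET RECOVERY FULL;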
View 2 Replies
View Related
Jul 14, 2015
I have transactional replication set up on two environments. On one server the distribution database is tiny, but on the second server the distribution database is 5 times bigger and taking up a lot of space; both environments have almost the same size of data.
View 15 Replies
View Related
Mar 16, 2004
I have a huge table with a 4-column primary key on it. I need to delete the data from this table (approx. 5.6 million records to be deleted). It takes a hell of a lot of time to delete it with a normal query.
Can someone please suggest me a better way?
Any help will be appreciated.
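When the rows to delete are a large share of the table, one hedged alternative is to copy the rows you want to keep into a new table and swap it in; the table name and keep-condition below are illustrative:

-- copy the survivors (SELECT INTO is minimally logged in simple/bulk-logged recovery)
SELECT *
INTO dbo.BigTable_keep
FROM dbo.BigTable
WHERE CreatedDate >= '20040101';   -- keep-condition is illustrative

DROP TABLE dbo.BigTable;
EXEC sp_rename 'dbo.BigTable_keep', 'BigTable';
-- re-create the primary key and any other indexes on the renamed table

If the 5.6 million rows are only a small fraction of the table, deleting them in batches (with SET ROWCOUNT on SQL Server 2000) is usually the gentler option.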
View 14 Replies
View Related
Oct 12, 2006
Hi,
I have a situation where I have 4 tables:
1. 2 dimension tables (parents): DIM1 with 50,000 rows and DIM2 with 1,000 rows
2. Fact 1 with 50 columns, 25 Million rows and with FK to DIM1 and DIM2
3. Fact 2 with 40 columns, and 25 Million rows and with FK to DIM1 & DIM2 tables.
Actually, Fact 1 and Fact 2 have the same related data, but since our Analysis cube person wanted a fact table not to have more than 50 columns, we divided the table into 2; they have the same compound key.
That said, I have a situation where I have to select all the columns in both fact tables and do a group by. I wrote the query and ran "Analyze Query in the Database Engine Tuning Advisor" for it. It gave a bunch of recommendations about the statistics and indexes, which I created. When I executed the query the result came up in a matter of seconds, which was good.
In the query I had a condition having MarketName='Bridgeview' and DateID = 344 (FK of today-1).
When I wanted the data for the last 30 days I changed the condition to DateID > (FK of today - 32) and DateID < (FK of today), and the query responded and worked fine.
But when I changed the query to get MarketName='Aurora' (other than the one I used when I ran the Tuning Advisor), the result returned is an empty set. When I removed the MarketName condition, it is supposed to return all markets' data, but it returns only Bridgeview data.
I know the data is in the table for all markets, since reports are rendered from these fact tables for all of these markets (I also ran queries to check the fact table data).
I am unable to point out the reason why the query behaves like this. It responds to the date change, but not to the MarketName change.
I really appreciate if anyone can help me point out the problem.
Thanks,
Venkat
View 3 Replies
View Related
Aug 21, 2007
Hi all,
I've faced a problem with the below error when I load 1.5 million rows into an Oracle database.
The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers.
Please help. Thanks
View 19 Replies
View Related
Dec 3, 2006
Hi all,
In an approach to building an ETL tool, we are in a situation where a table has to be loaded on an incremental basis. On the first run, all the records (approx 100 lakh, i.e. 10 million) have to be loaded. From the next run, only the records that were updated since the last run of the package, or newly added, are to be pulled from the source database. One idea we had was to have two OLE DB Source components: in one, get the records that were updated or newly added (since we have update columns in the DB, getting them is fairly simple); in the other, load all the records from the destination, pass them into a Merge Join, then have a Conditional Split down the pipeline and handle the updates and inserts.
Now the question is, how slow is this going to be? Could it be the case that the source DB returns records quickly while the Merge Join stalls waiting for all the records from the destination?
What might be the ideal way to go about my scenario? Please advise.
Thanks in advance.
View 13 Replies
View Related
Mar 13, 2008
I am actually a newbie to ASP.NET and I am using ASP.NET 3.5, i.e. VWD 2008. I am using it for the first time as my tool to develop a website for my final year project. I am planning to develop an online job recruitment site like www.monster.com. Right now I am confused about how I will manage my database. I've learned to use the databinding concept of SQL Server in VWD 2008, but will it be enough to handle such a huge number of job postings and employers, as well as resumes in PDF or Word format? Or do I have to create a separate database in SQL Server and connect it with my website? I am confused at the moment. Please help me in this matter.
Regards,
Jigzy
View 6 Replies
View Related
Jan 24, 2008
Hi
I have a table (SQL Server 2000) which has 14 cost columns for each record, and now, due to a new requirement, I have 2 taxes which need to be applied using two more fields called Share1 and Share2.
e.g
Sales tax = 10%
Use Tax = 10%
Share1 = 60%
Share2 = 40%
So Sales tax Amt (A) = Cost1 * Share1 * Sales Tax
So Use tax Amt (B) = cost1 * share2 * Use tax
same calculation for all the costs and then total cost with Sales tax = Cost 1 + A , Cost 2 + A and so on..
and total cost with Use tax = Cost1 +B, Cost 2 +B etc.
So there are around 14 new fields required to save the Sales Tax amount for each cost, and another 14 new fields each to store Cost with Sales Tax and Cost with Use Tax. That increases the table size.
Some of these fields might be used for making reports.
I was wondering which is a better approach out of the below 3:
1) To calculate these fields dynamically while displaying them on the User interface and not save in DB (while making reports, again calculate these fields dynamically and show), or
2) Add new formula field columns in database table to save each field, which would make the table size bigger, but reporting becomes easier.
3) Add only those columns in database on which reports needs to be made, calculate rest of the fields dynamically on screen.
Your help is greatly appreciated.
Thanks
View 3 Replies
View Related
Jan 24, 2008
Hi
I have a table (SQL Server 2000) which has 14 cost columns for each record, and now, due to a new requirement, I have 2 taxes which need to be applied using two more fields called Share1 and Share2.
e.g
Sales tax = 10%
Use Tax = 10%
Share1 = 60%
Share2 = 40%
So Sales tax Amt (A) = Cost1 * Share1 * Sales Tax
So Use tax Amt (B) = cost1 * share2 * Use tax
same calculation for all the costs and then total cost with Sales tax = Cost 1 + A , Cost 2 + A and so on..
and total cost with Use tax = Cost1 +B, Cost 2 +B etc.
So there are around 14 new fields required to save the Sales Tax amount for each cost, and another 14 new fields each to store Cost with Sales Tax and Cost with Use Tax. That increases the table size.
Some of these fields might be used for making reports.
I was wondering which is a better approach out of the below 4:
1) To calculate these fields dynamically while displaying them on the User interface and not save in DB (while making reports, again calculate these fields dynamically and show), or
2) Add new formula field columns in database table to save each field, which would make the table size bigger, but reporting becomes easier.
3) Add only those columns in database on which reports needs to be made, calculate rest of the fields dynamically on screen.
4) Create a view just for reports, and calculate values dynamically in UI and not adding any computed values in table.
Your help is greatly appreciated.
Thanks
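A hedged sketch of option 4: a reporting view derives the amounts from the existing cost columns so nothing extra is stored; the table name is illustrative and the 10% / 60% / 40% rates are the ones from the example above:

CREATE VIEW dbo.CostsWithTax
AS
SELECT c.*,
       c.Cost1 * 0.60 * 0.10             AS SalesTaxAmt1,       -- Cost1 * Share1 * Sales tax
       c.Cost1 * 0.40 * 0.10             AS UseTaxAmt1,         -- Cost1 * Share2 * Use tax
       c.Cost1 + (c.Cost1 * 0.60 * 0.10) AS Cost1WithSalesTax,
       c.Cost1 + (c.Cost1 * 0.40 * 0.10) AS Cost1WithUseTax
       -- ... repeat for Cost2 through Cost14 ...
FROM dbo.CostTable AS c;

Reports query the view as if the columns were stored, and the base table keeps its current size; if some derived values are queried very heavily, only those need to become computed or stored columns later.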
View 4 Replies
View Related
Nov 3, 2015
Have a database that's in "Simple" recovery mode whose .ldf has grown to 270 GB. This database is a data warehouse, so "full" is not required. I put it in simple mode a month ago and shrunk the log down, and now it's filled up the disk.
What steps can I take to mitigate this in future? I've read that this is caused by long-running transactions which fill the log for DR purposes. Should I put the database back into full mode and back up / truncate daily?
The auto-growth is set to 128 MB, which is very low.
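A hedged diagnostic sketch, assuming SQL Server 2005 or later and MyDW in place of the real name; even in simple recovery the log cannot be reused past the oldest open transaction, so the first question is what it is waiting on:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDW';          -- e.g. ACTIVE_TRANSACTION, REPLICATION, NOTHING

DBCC OPENTRAN ('MyDW');       -- shows the oldest active transaction, if any

USE MyDW;
DBCC SHRINKFILE (MyDW_log, 10240);   -- once the wait clears, shrink back; size in MB

Going back to full recovery with daily log backups only helps if point-in-time restores of the warehouse are actually needed; otherwise breaking the long-running loads into smaller transactions is usually the more direct fix.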
View 3 Replies
View Related
Aug 30, 2005
Has anyone implemented splitting data for an application between two databases because the data size is extremely large? If so, could you please point me to relevant information. In this split-data scenario, a table will automatically carry over to another database whenever the size limit for the current database is reached. The challenge here is for the DAL (data access layer) to automatically look in the appropriate database when the next row of data is in another database. Or perhaps there is another solution to this terasize data problem. Any help on this would be greatly appreciated.
View 8 Replies
View Related
Oct 31, 2015
I have a database data file at almost 2 TB, maxing out a Windows drive; only 16 GB is left. Should I just add another data file on another Windows drive for growth, or just move the current huge data file to a new GPT drive? Or do both: add another data file and move the existing one to its own new GPT drive?
Primary objective is to make do for now.
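A hedged sketch of the add-a-file option, with illustrative names, paths, and sizes:

-- add a second data file to the PRIMARY filegroup on the new drive
ALTER DATABASE MyBigDb
ADD FILE (
    NAME = MyBigDb_data2,                      -- new logical file name
    FILENAME = 'F:\Data\MyBigDb_data2.ndf',    -- on the new (GPT) drive
    SIZE = 100GB,
    FILEGROWTH = 10GB
) TO FILEGROUP [PRIMARY];

New allocations then spread across both files, which "makes do for now"; moving the existing 2 TB file to its own drive later would mean relocating the file via detach/attach or backup/restore.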
View 1 Replies
View Related
Jul 20, 2005
I have a table which contains approx 3,00,000 (300,000) records. I need to import this data into another table by executing a stored procedure. This stored procedure accepts the values from the table as params. My current solution is reading the table in a cursor and executing the stored procedure for each row. This takes far too long, approx 5-6 hrs. I need to make it better. Can anyone help? Samir
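A hedged sketch of the usual fix: if the stored procedure's per-row logic can be expressed as a set, a single INSERT ... SELECT replaces 300,000 separate procedure calls; table and column names are illustrative:

-- replaces the row-by-row cursor plus stored procedure calls
INSERT INTO dbo.TargetTable (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM dbo.SourceTable;

If the procedure must stay because it encapsulates validation or business rules, committing in batches of a few thousand rows instead of once per row usually cuts the runtime considerably.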
View 2 Replies
View Related
Dec 30, 2007
Hi.
I am working on a serial tracking application using SQL Server 2005 and .NET. One of the requirements is to have an ad-hoc file export utility in which users can drag and drop fields from a set of tables and export the results to CSV. It all sounds OK, and SQL Server Reporting Services' Report Builder seems to be just the right tool for it, but there is one problem:
The report size is big, about 7K - 8K pages and 4 - 5 columns wide; while rendering the report, IIS memory usage shoots up to about 2GB and remains at about 2GB.
Any idea if something can be done to mitigate this problem? Note that I don't need the HTML rendering at all. All I need is to have the CSV at the end of the day, while users are able to choose columns in an ad-hoc manner.
View 7 Replies
View Related
May 14, 2015
declare @error int, @rowcount int
select @rowcount = COUNT(1) FROM STG_BCDR;
while @rowcount > 0
begin
 BEGIN TRAN Deletion
[code]....
In the above code I try to delete records batch by batch to avoid table locking on the BCDR table. The total number of records in this BCDR table is 40,000. However, when I run the code and look at the execution plan, the BCDR table still gets a clustered index scan, which means the locking still happens.
If I change the DELETE TOP (5000)...... to DELETE TOP (5).... then there is a clustered index seek, which is good. The problem is that each pass only deletes the top 5 records, which means it will take a very long time to remove the data.
How do I handle this situation so that I can delete this huge amount of data without table locking happening? If table locking happens, other users will not be able to access this table at the same time.
View 6 Replies
View Related
Feb 20, 2014
We have a customized SharePoint application with very minimal data usage, and we have used only 5 to 6 lists and libraries in SharePoint.
Configuration is
Clients -- fire wall --- Load Balancer ---- WF1 and WF2 --- SQL DB
Routing is via the firewall.
Suddenly the site got dead slow, and we are unable to trace the problem as everything looks fine.
We checked with the firewall team and they stated it's fine from their end. We have also verified the counters: CPU, memory, page life expectancy, and buffer counters all look good, and we do not have huge data in the database. Only about 50 concurrent users are working...
View 2 Replies
View Related