Keeping Breakpoint And Debugging
Feb 27, 2007
Hi, is there any possibility to keep a breakpoint and run step-by-step execution, like we do in .NET? If so, please tell me how to do it.
Hi,
I have found that when I'm debugging a custom component in BIDS that I've created in another instance of Visual Studio, every time I rebuild the component I have to shut down and restart BIDS and then reattach to the BIDS process, which is pretty time consuming. And if I find a small error in my custom component while debugging, I don't seem to be allowed to make any changes to the code unless I stop debugging and go through the process above.
Am I missing something here? Or do I really have to manually go through these steps every time I want to change code in the component I'm debugging?
Can I automate the process with MSBuild or NAnt? If so, is there an example of this anywhere?
Thanks in advance,
Lawrie.
My SQL Agent keeps failing with a Dr. Watson message (see above) with different addresses. Sometimes the SQL Agent will restart and other times it stays down.
I get no messages in the Agent Log or the SQL Server log. I get one message in the NT Event Log "SQLServerAgent Monitor: SQLServerAgent has terminated unexpectedly".
Running NT 4 SP5 with SQL 7 SP2.
If you look at the NT Services, it shows that SQLAgent is still running. But SQL Enterprise manager shows it stopped.
If you try to look at a scheduled job or a DTS package, you get the message that SQL Agent is starting and to try back later.
I have stopped the SQL Agent service in NT Services and then tried to start it with Enterprise Manager. No luck.
Only way that I have found to get out of this mess is to restart the server.
Looking for clues
I'm trying to debug stored procedure code with SSDT, but the debugger doesn't stop at the breakpoint.
I imported the schema of the Northwind database from Denali to a new database project and used Run to publish it to localdb.
So, I inserted a breakpoint in the SalesByCategory stored procedure, opened a new query window, wrote an execute statement and started the execution with debug.
The debugger stops at the exec statement, but never gets into the stored procedure. I'm using a breakpoint in the stored procedure and have already tried F5 and F11; nothing works.
Application Debugging option in localdb is checked.
The operating system is windows server 2008 R2. I'm still using CTP 3.
Did I forget to check/do something ?
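For reference, a minimal example of the kind of execute statement described above, assuming the standard Northwind SalesByCategory signature (the parameter values here are arbitrary):
EXEC dbo.SalesByCategory @CategoryName = N'Beverages', @OrdYear = N'1998';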
Hi All,
How do we set a breakpoint in the VB.NET code to analyse it? Is it possible to put a breakpoint in a stored procedure so that analysis can be done?
Thanks
Abdul
I am still pretty new to SSIS, SQL, and DOT NET. I came from the UNIX world. I have a SSIS package. On the Control Flow one of the Items I have is a Script Task. I was able to successfully set breakpoints in the Script Task and they worked fine. I could step through the script and check values in variables. Life was good.
Now something happened. I set the breakpoint, from the menu I select "Start with debugging" and I get the following window:
-----------------------------------------------------------
Visual Studio Just-In-Time Debugger
An unhandled exception ('System.Runtime.InteropServices.COMException') occurred in DTAttach.exe [3380].
Possible Debuggers:
New instance of Microsoft CLR Debugger 2003
New instance of Visual Studio .NET 2003
New instance of Visual Studio 2005
[_] set the currently selected debugger as the default
[_] Manually choose the debugging engines
Do you want to debug using the selected debugger?
-----------------------------------------------------------
I have tried selecting Yes, but that doesn't work. If I delete all breakpoints the package runs fine. I greatly appreciate your help.
I'm having an issue with trying to delete breakpoints in my SSIS package. I think the breakpoints were created when I added data viewers throughout my process; however, the breakpoints were not deleted when I removed the data viewers. It then appeared that my process would not halt on ANY breakpoints - orphaned or valid.
Furthermore, whenever I tried deleting the breakpoints manually, my whole IDE would crash. I got around the issue by closing the dtsx page first, then deleting the breakpoints, and reopening my dtsx.
Are these valid bugs with SSIS? Has anyone else experienced this? I'm running this against a SQL Server 2005 database using VStudio 2005 as an IDE.
Thanks!
Chris P
I have a child package which is executed several times within the same SSIS ETL. I have placed a break point on one of the child package's tasks, set to trigger on a PreExecute() event. The first time the child package is invoked, the breakpoint is triggered. However, on each successive invocation the breakpoint is ignored. Does anybody know if this behaviour is normal? Thanks in advance!
SSIS 2008, when I develop and debug in BIDS, sometimes ignores a debug breakpoint.
The script component is in the main control flow and at some point the breakpoint did work.
If, for example, I create a new project and copy my script component there, the debug breakpoint will work. So it's absolutely *random* when it works and when it does not.
Below is my BIDS detail:
Microsoft Visual Studio 2008
Version 9.0.30729.4462 QFE
Microsoft .NET Framework
Version 3.5 SP1
Installed Edition: IDE Standard
Enterprise Library v5 Configuration Editor  4.0
[code]....
I ran into a variety of problems trying to set a script task breakpoint in a package containing multiple script tasks. The debugger apparently treats the breakpoint as if it were set in ALL tasks in the package, not just the one in which it is actually set.
At best this results in hitting breakpoints in scripts where they are not set and at worst the debugger brings up the "Send error report" dialog and quits (while the package continues to run). The latter seems to happen most often when the script with the breakpoint has more lines than an earlier script and the breakpoint is set at a line number that exceeds the number of lines in the earlier script--it bombs when the earlier, shorter script starts.
To get the debugger to work under these circumstances I had to add some nonsense code like
While False
Dim i as Integer = 0
End While
to every script, at the same line number near the beginning of the script (line 40, for example). I then set a breakpoint on the middle statement in one of the scripts (doesn't matter which) to cause the debugger to open at runtime. It doesn't hit the breakpoint because the line is never executed. If the breakpoint is set on a line that can be executed in any script in the package, bad things tend to happen.
I then add a "stop" statement to the script that I want to debug. This only works if the debugger is already open, hence the dummy breakpoint above.
This workaround is usable, but I am debugging a package that has quite a few scripts and having to insert dummy code in all of them at a fixed line number is rather inconvenient. I would really like to see breakpoints work the way one would expect--only in the scripts where they are set.
Is there some other, easier way around this problem? Is there at least an easier way to get the debugger to open so that "stop" will work?
SQL 7, SP3, NT4
How do I keep the job history of a job, say if I re-create the job?
We recreate the jobs often as part of a code move, but I'd like to retain the history of the previous jobs.
** sp_help_jobhistory -- only shows the jobs that exist, and not old jobs that no longer reside on the server.
Thanks,
AJ
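For what it's worth, job history lives in msdb and is keyed by job_id, so it disappears once a job is dropped and recreated with a new id. A hedged sketch of archiving it before the code move (the archive table and job name are made up; use INSERT ... SELECT instead of SELECT ... INTO on later runs):
SELECT j.name AS job_name, h.*
INTO dbo.JobHistoryArchive           -- made-up archive table, created on first run
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE j.name = 'MyJob';              -- made-up job name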
Hi all, here are my goals: Have the same DB on two different stand-alone computers, and keep them up-to-date from each other.
Basically a user would input to a DB for a week. Then every week or two, update the other stand alone DB with the new input. The DB would be exactly the same.
What are my options for this? I'd like it as easy as possible! Are there any software packages that deal with this type of transfer, etc.? Thank you!
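Since the changes seem to flow one way for a given period, the simplest option is probably a scheduled backup/restore swap; a hedged sketch with made-up names (if both copies are edited at the same time, merge replication is the SQL Server feature aimed at that):
-- On the machine that was updated this week: take a full backup to a portable file
BACKUP DATABASE MyAppDb TO DISK = 'C:\Transfer\MyAppDb.bak' WITH INIT;

-- On the other machine, after copying the file over: overwrite its copy
RESTORE DATABASE MyAppDb FROM DISK = 'C:\Transfer\MyAppDb.bak' WITH REPLACE;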
Hello
my table :
Report :
R_id (PK)
RName
RDate
I have a few tens of thousands of rows and I want to keep at most the last 10 rows (or fewer if there aren't that many) for each name.
I can have 100 reports per name (100 rows with the same name, and of course R_id and RDate are different).
How can I do it?
thanks a lot for helping
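A hedged sketch of one way to trim the table, assuming SQL Server 2005 or later for ROW_NUMBER() (table and column names are taken from the post):
;WITH ranked AS
(
    SELECT R_id,
           ROW_NUMBER() OVER (PARTITION BY RName ORDER BY RDate DESC, R_id DESC) AS rn
    FROM Report
)
DELETE FROM ranked    -- deleting through the CTE removes the underlying rows
WHERE rn > 10;        -- keep only the 10 most recent rows per name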
Hi, I have the following code in a query:
SELECT [Issue date],DATEDIFF("dd",[Issue date],[Start date])/365 AS runningdays
FROM Database1..[Insurance Policies Working DB]
WHERE [Policy Number] LIKE '%1368529%'
The part 'DATEDIFF("dd",[Issue date],[Start date])' comes out as 364 if calculated on its own. However, then when it is divided by 365 it comes out as 0. How do I get it to show as a decimal instead of just rounding it down automatically? (Hope I've made sense)
Thanks in advance
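The 0 comes from integer division: 364 / 365 is truncated to 0. A hedged illustration of the usual fix, dividing by 365.0 (or casting either side to DECIMAL) so the result keeps its fractional part:
SELECT [Issue date],
       DATEDIFF(dd, [Issue date], [Start date]) / 365.0 AS runningdays
FROM Database1..[Insurance Policies Working DB]
WHERE [Policy Number] LIKE '%1368529%'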
we have a simple table
Key, Name, Address, City, State, Zip ................ etc.
I would like to keep this table sorted by Name; therefore I won't have to sort my results with every query.
I think I need to add something to my insert to tell my table: "Hey, take Jones", open up the proper place and stick him in the proper spot.
Ex: We have Appleby and Robertson in our table now. My insert would tell SQL Server to take Jones, figure out where he belongs (alphabetically), and stick him in, resulting in:
Appleby
Jones
Robertson
This way I won't have to ask the query to sort stuff every time I reference this table; this will save lots and lots of overhead and help keep my clients happy with quick(er) response.
thanks in advance -arthur
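The usual way to get this without per-insert logic is a clustered index on Name, which keeps the rows stored in Name order (strictly speaking a query still needs ORDER BY to guarantee ordered results, but the sort becomes essentially free). A hedged sketch; the table name is made up since the post only lists the columns, and if Key is currently the clustered primary key it would first have to be recreated as nonclustered, because a table can have only one clustered index:
CREATE CLUSTERED INDEX IX_Clients_Name
ON dbo.Clients (Name);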
Hi
We need to keep track of all changes that are made to our tables.
The changes will be saved in a table that records:
- the table in which the change was made
- the name of the field that was changed
- the old data for the field
- the new data for the field etc..
I've seen a few examples that record the name of the table that was modified, but none that record changes down to the field level.
Can anybody give some guidance?
Thanks..
Wayne
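A hedged sketch of a field-level audit trigger for a single table and column, just to show the mechanics of comparing the deleted (old) and inserted (new) pseudo-tables; the table Customers, its CustomerID key, the City column and the ChangeLog layout are all made-up names:
CREATE TABLE ChangeLog
(
    LogID     INT IDENTITY(1,1) PRIMARY KEY,
    TableName SYSNAME,
    FieldName SYSNAME,
    OldValue  NVARCHAR(4000),
    NewValue  NVARCHAR(4000),
    ChangedAt DATETIME DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Customers_Audit
ON Customers
AFTER UPDATE
AS
BEGIN
    -- Log only the rows where the City column actually changed
    INSERT INTO ChangeLog (TableName, FieldName, OldValue, NewValue)
    SELECT 'Customers', 'City', d.City, i.City
    FROM inserted AS i
    JOIN deleted  AS d ON d.CustomerID = i.CustomerID
    WHERE ISNULL(d.City, '') <> ISNULL(i.City, '');
END;
In practice one such block is needed per audited column (or the trigger body can be generated from INFORMATION_SCHEMA to cover a whole table).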
I need to update a row but keep a lock on the table (so no one else can update it) while I run some more code. In Oracle, it always locks whatever you update until you hit commit, but SQL Server seems to work the opposite way (each statement commits on its own). How do I tell it not to commit a statement, or how would I explicitly get a lock and then release it later?
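A minimal sketch of the usual SQL Server pattern: inside an explicit transaction the update's exclusive lock is held until COMMIT (or ROLLBACK), just like Oracle; the table and column names are made up:
BEGIN TRANSACTION;

UPDATE dbo.Accounts
SET Balance = Balance - 100
WHERE AccountID = 42;      -- exclusive lock on the row is held from here on

-- ... run more code here; other sessions cannot update the locked row ...

COMMIT TRANSACTION;        -- lock released here (or ROLLBACK to undo)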
Hello All,
I have a problem concerning keeping track of a value within a query.
I have a table that tracks invoices recieved and payments made.
For each invoice number there may be multiple payments made against it.
I need something that will check and make sure that each invoice number has its payments equal to its received amount.
Any help would be greatly appreciated.
Thanks,
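A hedged sketch of one way to express that check, assuming made-up column names and that the received amount is repeated on each payment row for the same invoice:
SELECT InvoiceNumber,
       MAX(ReceivedAmount)           AS ReceivedAmount,
       SUM(ISNULL(PaymentAmount, 0)) AS TotalPaid
FROM dbo.InvoicePayments
GROUP BY InvoiceNumber
HAVING MAX(ReceivedAmount) <> SUM(ISNULL(PaymentAmount, 0));   -- invoices that don't balance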
Hello everyone,
I have a winform application with C# front end and sql express 05 backend.
In this database I have a table that holds manufacturer-provided pricing, and the manufacturers we work with update pricing constantly.
We have one table called "manufacturerpricing" which we are constantly inserting and deleting pricing records to/from to keep manufacturer pricing up to date. We may insert and delete as many as 2,000,000 records per month into this table.
This works perfectly fine and we have no problems here at all.
But with that being said, I am worried about the size of the database growing out of control due to temporary space etc. The database just keeps getting bigger and bigger.
How do I run some maintenance to keep the database size under control?
I would like to run this automatically from the C# front end, so if there is a stored proc I can call or a C# assembly I can reference, that would be ideal.
Any help is greatly appreciated.
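A hedged sketch of the kind of maintenance typically run for this workload; the database name is made up, the table name comes from the post, and routine shrinking is widely discouraged since the freed space is usually reused by the next month's inserts anyway. Both statements can be issued from the C# front end with a plain SqlCommand:
-- Rebuild the indexes on the heavily churned table to reclaim internal free space
ALTER INDEX ALL ON dbo.manufacturerpricing REBUILD;

-- Optionally shrink the database files, leaving 10% free space (use sparingly)
DBCC SHRINKDATABASE (PricingDb, 10);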
Hi,
I'd like to keep state between calls to a UDF (mainly for caching purposes). I can shove an object into the appdomain using SetData and read it using GetData, but that requires the assembly to be set to UNSAFE. I'm confident I can secure the DB and the assembly fairly well, but I like defense in depth, and if there's another way to save state between calls to a UDF, I would prefer those.
Is there another way to store state between calls to a UDF, without putting data into DB tables or using things that will require the assembly to have such a wide permission set?
Thanks,
Alex
Let's say I develop a 'version 1' of a program/database and ship it to some customers.
Then I continue work on version 2 with some database changes etc.
When version 2 is shipped to the customers, they need a way to upgrade their database
from version 1 to version 2. I guess this is easiest done with one/several sql-scripts.
What I want is your opinion about the best way to keep track of the changes between
version 1 and 2. Here are two ways with pros and cons I've thought of:
1 - Keep manual track of every change (index, changed columns, relations etc) and update a scriptfile with all changes.
Pros: Full control of what is happening
Cons: Pretty much extra work and risk of missing some changes.
2 - When version 2 is ready, run some third party tool to get a diff between the databases
Pros: Fast and easy
Cons: The DB-changes may need to be applied in a special order, with default values, etc etc and I'm not sure all tools handle this in a good way...?
So... what are your recommendations? I'm sure this must be a headache in every development project?
Regards Andreas
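For what it's worth, a common middle ground is option 1 plus a version table inside the database, so each upgrade script can check what it still needs to apply; a minimal sketch with made-up names:
-- Track the schema version inside the database itself
IF OBJECT_ID('dbo.SchemaVersion') IS NULL
    CREATE TABLE dbo.SchemaVersion (Version INT NOT NULL, AppliedAt DATETIME NOT NULL DEFAULT GETDATE());

-- Upgrade step: version 1 -> version 2 (runs only once)
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 2)
BEGIN
    ALTER TABLE dbo.Customer ADD MiddleName NVARCHAR(50) NULL;   -- example change
    INSERT INTO dbo.SchemaVersion (Version) VALUES (2);
END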
This may be more of a data design question and not an ssis question, but figured folks here could have a good idea.....the organization I'm in has the business need of collecting data from outside organizations and tracking what data is bad and what data is good. When I say bad data I mean everything from things outside of range to absolute *** - characters in integer columns, integers in character columns, special characters, etc. The data comes in in the form of flat file so it's a free for all until it hits ssis & the db engine.
Eventually of course they work to get the data corrected at the source & resubmitted but in the meantime, they have the legitimate need of not only pushing the data into the database (dirty or not), but keeping all the bad stuff. I can't in good conscience make everything a varchar to catch everything - that would go against the database gods. IMO - I still must make an integer be an integer , characters are characters, etc. But what do I do with the junk? Any thoughts?
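One pattern that fits this, hedged and with made-up names: land the file into an all-varchar staging table so nothing is lost, move the rows that convert cleanly into the typed table, and keep the rejects verbatim (with a reason) so they can be sent back to the source:
-- All-varchar staging table: nothing is rejected at load time
CREATE TABLE dbo.Staging_Measurements (RecordID VARCHAR(50), Quantity VARCHAR(50));

-- Typed table for the good rows, plus a reject table that keeps the junk verbatim
CREATE TABLE dbo.Measurements         (RecordID VARCHAR(50), Quantity INT);
CREATE TABLE dbo.Measurements_Rejects (RecordID VARCHAR(50), Quantity VARCHAR(50), Reason VARCHAR(100));

-- Digit-only values convert safely to INT...
INSERT INTO dbo.Measurements (RecordID, Quantity)
SELECT RecordID, CAST(Quantity AS INT)
FROM dbo.Staging_Measurements
WHERE Quantity NOT LIKE '%[^0-9]%' AND Quantity <> '';

-- ...everything else is preserved as-is for the source to fix and resubmit
INSERT INTO dbo.Measurements_Rejects (RecordID, Quantity, Reason)
SELECT RecordID, Quantity, 'Quantity is not a valid integer'
FROM dbo.Staging_Measurements
WHERE Quantity LIKE '%[^0-9]%' OR Quantity = '' OR Quantity IS NULL;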
I am newish to databases and would appreciate some advice. I think I have a solution to my problem but it is going to take me a lot of time to get it running. If there is a better way of doing it I would like to know.
I have a table "eventDates" with columns (id, date, eventID, eventCount).
The id auto-increments as a primary key.
date holds the date of the event.
eventID references another table with info about the events.
Up to 9 eventIDs can be added for each date, and I want eventCount to hold an integer (1 to 9) to allow me to "pivot" the data to the table below:
"results" with columns (date, eventCount1, eventCount2 ... eventCount9), so each row will hold a date and none to nine eventIDs occurring on that date.
Is there an easy way to keep eventCount accurate, or do I just have to write a lot of code? I will need to be able to remove events as well as add them. I will use a mixture of stored procedures and VB.Net, I guess?
Many thanks for any advice.
Mike
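A hedged sketch of an alternative that avoids maintaining eventCount by hand: number the events per date at query time and pivot with CASE. It assumes SQL Server 2005 or later for ROW_NUMBER(); the table and column names are from the post:
;WITH numbered AS
(
    SELECT [date],
           eventID,
           ROW_NUMBER() OVER (PARTITION BY [date] ORDER BY id) AS eventCount
    FROM eventDates
)
SELECT [date],
       MAX(CASE WHEN eventCount = 1 THEN eventID END) AS eventCount1,
       MAX(CASE WHEN eventCount = 2 THEN eventID END) AS eventCount2,
       -- ... repeat for eventCount3 through eventCount8 ...
       MAX(CASE WHEN eventCount = 9 THEN eventID END) AS eventCount9
FROM numbered
GROUP BY [date];
Adding or removing a row in eventDates then needs no extra bookkeeping, since the numbering is recomputed each time the query runs.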
Hi, I am trying to find a way to capture all the status information (start time, execution time, status messages etc.) from executing a DTS package into a table I will create in a database. Does anyone know where that information is kept?
When I execute the DTS package manually, a window comes up and shows the status of each step within the DTS package. I am hoping to capture this information and load it into my log table.
Thanks in advance.
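If the package has logging to SQL Server enabled (on its Logging tab), DTS writes the same per-step details shown in that status window into msdb, and they can be copied from there into a custom log table. A hedged pointer; the exact columns are worth checking on your server:
-- One row per package execution (name, start/end time, error info)
SELECT * FROM msdb.dbo.sysdtspackagelog;

-- One row per step (step name, elapsed time, error description)
SELECT * FROM msdb.dbo.sysdtssteplog;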
How can I instruct SQL Server to keep entire table in memory? ie the memory pages should not be swapped to HD.
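On SQL Server 7.0/2000 the usual answer was DBCC PINTABLE, which keeps a table's pages in the buffer pool once they have been read (it is deprecated, has no effect from SQL Server 2005 on, and makes it easy to starve the cache). A hedged sketch with made-up database and table names:
USE MyDatabase;

DECLARE @db_id INT, @tbl_id INT;
SET @db_id  = DB_ID('MyDatabase');
SET @tbl_id = OBJECT_ID('dbo.MyTable');

DBCC PINTABLE (@db_id, @tbl_id);      -- pin: pages stay in memory once read
-- DBCC UNPINTABLE (@db_id, @tbl_id); -- reverse it later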
I'm exporting a large database. In enterprise manager the settings on
all of the PK/FK relationships are that
"Enable relationship for Insert AND Update" is UNchecked.
I need it this way, so I can delete and insert to the tables
without being hassled by THE MAN.
When I export the database, using DTS I export all the objects (EVERYTHING),
and all the data too. When I open the freshly copied database and get
properties on any relationship "Enable relationship for Insert AND Update"
is CHECKED! ARGH!
How do I keep this from happening? I'm so frustrated.
It is very time consuming to uncheck that darn box on hundreds
of relationships. Why doesn't it just stay the way it is set in
the original source DB ??
Is there a way to export a database and keep it EXACTLY the same?
If anyone can help me with this it would save me dozens of hours in work.
Thanks in advance.
Josh
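A hedged workaround on the destination side rather than a DTS fix: "Enable relationship for INSERT and UPDATE" unchecked corresponds to the foreign key being created with NOCHECK, and that state can be reapplied in bulk after the copy using the undocumented but long-standing sp_MSforeachtable helper:
-- Disable enforcement of every FK and CHECK constraint in the copied database
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- To turn enforcement back on later (and revalidate existing rows):
-- EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';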
I have a trigger that keeps track of status changes...
IF UPDATE(STATUS)
BEGIN
DECLARE @currentdate datetime
DECLARE @currentstatus integer
DECLARE @UserID integer
DECLARE @PermitID integer
DECLARE @Status integer
[Code] .....
It works, but not the way I want it to. The @currentstatus and @newstatus are the same. I want the status before and after the update. I asked around about how to do this and someone told me to use the deleted table.
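That advice matches the standard pattern: in an AFTER UPDATE trigger, deleted holds the before image of each row and inserted holds the after image. A hedged sketch; the Permits table, its PermitID key, the Status column and the StatusHistory table are assumed from the variable names in the post:
CREATE TRIGGER trg_Permits_StatusChange
ON Permits
AFTER UPDATE
AS
BEGIN
    IF UPDATE(Status)
    BEGIN
        INSERT INTO StatusHistory (PermitID, OldStatus, NewStatus, ChangedAt)
        SELECT i.PermitID,
               d.Status,     -- value before the update (from deleted)
               i.Status,     -- value after the update (from inserted)
               GETDATE()
        FROM inserted AS i
        JOIN deleted  AS d ON d.PermitID = i.PermitID
        WHERE ISNULL(d.Status, -1) <> ISNULL(i.Status, -1);   -- only real changes
    END
END;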
I have just finished upsizing an Access database to SQL Server 2000. Now the SQL Server needs to be run on a test basis to determine if I need to make more changes to the front-end (Access). The problem I am facing is how to keep the two databases in sync while I am testing. Any suggestions?
Also, any suggestions or comments on how to run a test setup like this (in parallel) are welcome, since this is my first time attempting a project like this.
Let me know if anyone needs more info.
Thanks in advance.
Hi all
I have a stored procedure with a few DML statements inside.
I need to log the success/failure of each DML statement into a separate audit table as follows:
Begin
DML Stmt1
Log success/failure (insert stmt to audit table)
DML Stmt2
Log success/failure (insert stmt to audit table)
end
When any of the above DML statements fails, the entire transaction gets rolled back, including the audit information.
How can I keep the audit info intact when the transaction is rolled back?
Thanks.
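One standard trick is to collect the audit rows in a table variable, because table variables are not undone by ROLLBACK, and then write them to the real audit table after the transaction has finished either way. A hedged sketch assuming SQL Server 2005 or later for TRY/CATCH; the statements and the audit table layout are made up:
DECLARE @audit TABLE (Detail NVARCHAR(256), Succeeded BIT, LoggedAt DATETIME DEFAULT GETDATE());

BEGIN TRANSACTION;
BEGIN TRY
    UPDATE dbo.SomeTable SET SomeCol = 1 WHERE SomeKey = 42;    -- DML Stmt1
    INSERT INTO @audit (Detail, Succeeded) VALUES ('Stmt1', 1);

    DELETE FROM dbo.OtherTable WHERE OtherKey = 99;             -- DML Stmt2
    INSERT INTO @audit (Detail, Succeeded) VALUES ('Stmt2', 1);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    INSERT INTO @audit (Detail, Succeeded) VALUES (LEFT(ERROR_MESSAGE(), 256), 0);
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;   -- the table variable keeps its rows
END CATCH;

-- Persist the audit trail regardless of the outcome
INSERT INTO dbo.AuditLog (Detail, Succeeded, LoggedAt)
SELECT Detail, Succeeded, LoggedAt FROM @audit;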
Hey all,
I'm trying to set up a backup schedule in SQL 2005 that will keep 5 .bak files within the folder I specify. I can get it to keep one backup only, even though I specify that it is to keep 5 days' worth of backups. Each backup happens at 12:00 am. I think the problem I may be having is that I don't know how to get the maintenance plan to name the file mmddyyyy.bak, or whatever it may need, to determine that the file is 5 days old. Currently I am specifying a name of gmsqlbackup.bak. Any ideas that may help me out?
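The maintenance plan cleanup task goes by the files' dates on disk, but if a date-stamped name is wanted anyway, here is a hedged sketch of a T-SQL job step that builds the mmddyyyy name itself (the database name and path are made up):
DECLARE @file NVARCHAR(260);
SET @file = N'D:\Backups\gmsqlbackup_'
          + REPLACE(CONVERT(CHAR(10), GETDATE(), 101), '/', '')   -- mmddyyyy
          + N'.bak';

BACKUP DATABASE GMDatabase TO DISK = @file WITH INIT;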
I have a report that pulls a customer balance.
In Crystal there was a way to have the page not show up if it met a certain criterion (say, if the balance was 0 or negative).
I'd rather not filter them in SQL because it takes a few calculations to figure out what their balance is, and I already have SRS doing that calculation.
So is there a way to have a report page not print based on a certain criteria?
I'm importing tables from another SQL Server -
now the tables are coming is as
esther.tablename (my username on the other server)
instead of
dbo.tablename
How can I change this?
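A hedged example of the usual fix, using the built-in sp_changeobjectowner procedure once per imported table (esther.tablename is just the pattern from the post; on SQL Server 2005 and later the equivalent is ALTER SCHEMA dbo TRANSFER):
EXEC sp_changeobjectowner 'esther.tablename', 'dbo';

-- SQL Server 2005+ equivalent:
-- ALTER SCHEMA dbo TRANSFER esther.tablename;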
This may not be a MSSQL-specific question, but I wanted to ask it here first, in case there's a MSSQL and/or SourceSafe solution that will help. Our dev team is having some difficulty with keeping the nightly builds in sync with the stored proc mods. I'm wondering if there are some good case studies on how to avoid this "drift". Something like genning a new DB from checked-in SPs, etc. alongside each regular build, then always have a paired enterprise app/database duo that is tagged and added to a history. FWIW, we have a 3-tier .NET/C# app, and ADO.NET is throwing exceptions every other day. If the suggestion is to whip the DB guys, that works for me as well. ;-) Nah, there's much love there.
Thanks in advance,
~swooz