Huh?!? Strange Behavior From The Sysaltfiles Table.
Jan 18, 2007
Try a little experiment. Partly to humor me, and make me believe I am not quite insane.
Step 1: Install SQL Server 2000 such that the data files are not in the default location, but in a location with a shorter path (i.e. install the data files to E:\MSSQL8).
Step 2: Run the following queries, and comment on any oddities:
select filename
from master..sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename)), filename
from sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename))
from sysaltfiles
where dbid = 2
I am guessing that #2 is some sort of odd effect caused by the fixed length data field, but I just want to make sure that other people get this oddity, and not just me. I have no idea what could be causing the third output...or perhaps the lack of it.
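For what it's worth, a hedged guess at the mechanism: sysaltfiles.filename is a fixed-length nchar(260) column, and if its unused tail is padded with CHAR(0) rather than spaces, RTRIM leaves the padding in place and REVERSE moves the null characters to the front, where most clients stop rendering. A sketch of a test for that theory (the CHAR(0) assumption is mine, not established by the post):

-- Sketch: strip CHAR(0) padding before reversing, and measure what RTRIM removed
-- (if REPLACE ignores CHAR(0) under your collation, the first column won't change)
select reverse(rtrim(replace(filename, char(0), ''))) as cleaned,
       datalength(filename) as raw_bytes,
       datalength(rtrim(filename)) as bytes_after_rtrim
from master..sysaltfiles
where dbid = 2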
I've created a new table that stores the UserId, a uniqueidentifier taken from Membership.GetUser().ProviderUserKey. When I make a select statement through a stored procedure in code-behind, it runs as it should:

Dim GetCustomersCars As CustomerCarByUserId = New CustomerCarByUserId
MyCars.DataSource = GetCustomersCars.CarByUserId(Membership.GetUser().ProviderUserKey)
MyCars.DataBind()

But when I use an ObjectDataSource it fails:

<asp:ObjectDataSource id="ObjectDataSource1" runat="server" selectmethod="CarByUserId" typename="CustomerCarByUserId">
  <SelectParameters>
    <asp:Parameter defaultvalue="Membership.GetUser().ProviderUserKey" name="UserId" type="Object" />
  </SelectParameters>
</asp:ObjectDataSource>

I've tried Membership.GetUser().ProviderUserKey.ToString(), but that doesn't work either. Error message: InvalidCastException. I connect to the same source in both cases. Anyone with an idea?
I have an SP that usually works fine (CPU time 0-16, duration 40 ms), but from time to time the server hangs for no apparent reason. The SP has a lock timeout set to 500, so it should abort if a lock timeout error (1222) occurs, but it doesn't. The Profiler reports very long execution times (over 30 sec), and because of that all other SP calls are blocked, since the transaction opened by the first SP execution has not finished yet. Other attempts to identify blocking queries (sp_lock, DBCC OPENTRAN) did not show me anything suspect other than the usual blocked chain. I'm starting to think about an IO bottleneck, or an IO failure, that could block disk access and cause the delay. The status of the RAID 5 array is healthy.
The server is used as the storage system for a website (approx. 2,000 concurrent users), and I have occasionally noticed an ASP queue, but this strange behavior occurs even during off-peak hours.
Any thoughts? ----- HP server: 2 CPUs @ 3.4 GHz; 4 GB RAM; SCSI RAID 5; Windows 2000 Advanced Server; SQL Server 2000 SP4
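Not part of the original post: a hedged diagnostic sketch for the next hang, assuming SQL Server 2000. Snapshotting sysprocesses while the blocking is in progress sometimes shows an IO-related wait that sp_lock alone won't:

-- Sketch: capture the blocking chain and wait state during the hang
select spid, blocked, waittype, lastwaittype, waittime, physical_io, cmd
from master..sysprocesses
where blocked <> 0 or waittime > 0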
Hi folks. I have a C# app connecting to an MS Access database with several tables. In specific situations I have problems with a DateTime type in a table. The problem is that when I want to select records from a table in a specific period, the day and month seem to be swapped in the query, but it only happens when the swapping gives a valid date, e.g.:

12/10/2005 (12 Oct 2005) returns records for 10/12/2005 (10 Dec 2005)
23/05/2005 (23 May 2005) returns records correctly, since 05/23/2005 is not a valid date with Danish regional settings.

The query is:

"SELECT [ID], [Activity], [BeginDate] FROM TimeReg WHERE [BeginDate] >= #" + _start + "# AND [BeginDate] <= #" + _end + "#"

_start and _end are of type DateTime. My PC is running with Danish regional settings, and if I switch to en-US settings in the control panel this fixes the problem, but that is not a solution for me. Any suggestions to solve this problem? Thanks in advance. Kim W.
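A hedged sketch of the usual workaround: the Jet engine parses #...# date literals in US or ISO-like formats regardless of regional settings, so formatting _start and _end as yyyy-MM-dd (instead of relying on the locale default of DateTime-to-string conversion) sidesteps the swap. With the example dates from the post, the generated query would read:

SELECT [ID], [Activity], [BeginDate]
FROM TimeReg
WHERE [BeginDate] >= #2005-10-12# AND [BeginDate] <= #2005-10-23#

(Parameterized commands avoid the string-formatting question entirely.)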
I have the following code on a standalone computer and it worked perfectly. Suddenly, without any significant changes to the code, no server instances were found on my local computer. I know there are several server instances on the computer. Why is it acting so unpredictably? The same thing happened when I tried SQLDMO.

// Get a list of SQL Server instances available on the network
DataTable dtSQLServers = SmoApplication.EnumAvailableSqlServers(false);
I have 2 packages that for ease I'll call Parent & Child. The Parent package calls the Child package as the 4th step in the process. Once the Child has completed, the Parent has a few more imports that it does.
The Portfolio table is loaded in the Child package which is step 4 in the Parent package. Then in step 5 a few tasks utilize that Portfolio data for lookups.
The strange part is that there are probably 4 or 5 data tasks that do lookups against the Portfolio data in step 5 (step 5 is a container). All but 2 of the data tasks retrieve data from the Portfolio data. The other 2 don't find any data and just move on. Once the package stops, if I simply execute those tasks by hand, they run and load the data correctly.
It seems to me to be a caching or an isolation problem but I can't find a solution.
Hi all, I face a problem as follows: we have an application running on SS2K. We log every delete of documents (from the Archive table) in another table. Now it seems some of the rows have been deleted, strangely, without any delete log from our application. We assumed there was somebody with direct access to the database deleting them manually (obviously our app does not generate any log in this situation), but there is no such person; we have checked that with the admins many times. Does SQL Server itself delete rows for any reason? How can I know what is happening? Do you think our app is flawed somewhere? Thanks a lot for your attention.
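One hedged way to catch the mystery deletes at the server rather than in the application: a catch-all DELETE trigger on the Archive table that fires no matter which connection issues the delete (cascading foreign keys, for example, remove rows without the application ever running a DELETE, and an AFTER trigger should still fire for them). All table and column names below are placeholders for illustration:

-- Sketch: server-side delete audit; adjust names and types to the real schema
CREATE TABLE ArchiveDeleteAudit (
    DocumentId int,              -- placeholder key column
    DeletedAt  datetime,
    DeletedBy  nvarchar(128),    -- login that issued the delete
    FromHost   nvarchar(128)     -- workstation name, if any
)
GO
CREATE TRIGGER trg_Archive_DeleteAudit ON Archive
AFTER DELETE
AS
INSERT INTO ArchiveDeleteAudit (DocumentId, DeletedAt, DeletedBy, FromHost)
SELECT d.DocumentId, GETDATE(), SYSTEM_USER, HOST_NAME()
FROM deleted d
GO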
Hi, we are noticing some strange behavior with MSSQL. I was hoping somebody could shed some light.
For the past few days in our production database we have been getting the following error:
Could not allocate space for object 'Person' in database 'PROD' because the 'PRIMARY' filegroup is full...
Some data from our system:
The PRIMARY filegroup is 20 GB in size, and 80% of it is free. Also, the PRIMARY filegroup is set to auto-grow, and there is about 20 GB of free space at the OS level. So I don't think it has anything to do with the filegroup.
I started doing some research on the 'Person' object (table) and ran sp_spaceused etc. to get some data. On a trial-and-error basis I ran DBCC INDEXDEFRAG on the 'Person' table, and the error went away.
Questions:
1. Why is the error misleading? Why does it say the 'PRIMARY' filegroup is full?
2. Why am I getting this error, and why does running DBCC INDEXDEFRAG fix the problem?
3. I can understand the index being fragmented and needing a defrag, but can MSSQL actually fail with this error if the index is fragmented too much?
4. What data can I look at to prevent this from happening in the future?
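A hedged diagnostic sketch for the next occurrence: capture allocation and fragmentation numbers for the table before running the defrag, so there is something concrete to correlate with the error (SQL Server 2000 commands):

-- Sketch: allocation and fragmentation snapshot for the Person table
EXEC sp_spaceused 'Person', @updateusage = 'true'
DBCC SHOWCONTIG ('Person') WITH ALL_INDEXES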
Running SQL 2000 SP4 on Windows 2000 Server. When a SELECT query is executed in Query Analyzer, results are displayed in the results pane, fine... when an ORDER BY clause is added to the SELECT statement, the query runs for approx. 20 seconds then displays "TempDb log is full [Error 9002, Severity 17]". (Tempdb is set to autogrow/10%/unrestricted and there is plenty of storage space.) The next time the query is executed after getting the "tempdb log is full" error, the server reboots upon query execution. As soon as F5 or Ctrl-E is pressed to execute the query, the server does a hard crash - black screen, then reboot... no warnings, no Event Viewer log, no SQL log warnings/errors, no drwtsn log, no hardware log message errors... nothing.
Re-applied Windows 2000 SP3 and SQL SP4 to server, same behavior.
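A hedged first step, assuming the log really is filling rather than some other limit being hit: watch tempdb log usage while the ORDER BY query runs.

-- Sketch: reports log size and percent used for every database, tempdb included
DBCC SQLPERF (LOGSPACE)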
I have some code that just works. But when I put it into an exec() I get a strange error. First, the code:
exec ('
select year, quarter, min(price) as minimum
into #temptable
from (
    select ntile(4) over (partition by year, quarter order by price) as rang,
           year, quarter, price
    from (select distinct id, year, quarter, price from #tbl1) as a
) as b
group by rang, year, quarter

Select year, quarter,
       (SELECT CAST(minimum as varchar(max)) + ","
        FROM #temptable t2
        where t1.year = t2.year AND t1.quarter = t2.quarter
        FOR XML PATH(""))
from #temptable t1
group by year, quarter
')
SQL Server says that INSERT INTO is missing a column name. It points at the line with FOR XML PATH(""). Any idea what's wrong here?
The Output without exec (and correct quotes) looks like:
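One guess, offered as a sketch rather than a diagnosis: the batch inside exec('...') is compiled with the session's QUOTED_IDENTIFIER setting, and with it ON (the common default) the double quotes in "," and "" parse as identifiers rather than string literals, which can surface as misleading column-name errors. Doubled single quotes remove the dependency:

-- Sketch: escape the inner strings with doubled single quotes instead
exec ('
select name + '',''
from tempdb.dbo.sysobjects
for xml path('''')
')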
I created a very simple SSIS package (it just updates a single row in a table). When I execute the package from the command line (using dtexec), it takes about a second to finish, as expected. But when I execute it using dtexec via xp_cmdshell, it takes about 91 seconds. When I use a SQL job to execute the package as an operating-system command, it takes 91 seconds. Using a SQL job to execute it as an SSIS package takes, again, 91 seconds. It appears that something is causing a delay of about 90 seconds before the package actually gets executed. I tried changing the SSIS service account, but that didn't change anything. Why is executing the package through SS2005 different from executing it directly from the command prompt?
I have a trigger on each table in a database which updates a datetime column (lastupdatedon) and a varchar field (enteredby) after update on each individual table. The problem is, when one table is updated at the same instant as another table (by different users), the same varchar data (SYSTEM_USER) is put in both tables, even though the users are different.
Here is an example of the trigger:
CREATE TRIGGER EventUpdate ON jrowley.Event
AFTER UPDATE
AS
UPDATE jrowley.Event
SET LastChangedOn = getdate(),
    EnteredBy = SYSTEM_USER
WHERE EventID in (Select EventID from deleted)
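Not from the original post: a hedged variant of the trigger that logs extra session context next to SYSTEM_USER. If the two "different users" turn out to share a host or spid, that points at a shared login or connection rather than at the trigger (the EnteredBy column is assumed wide enough for the extra text):

-- Sketch: same trigger logic with host name and spid appended for diagnosis
CREATE TRIGGER EventUpdateDiag ON jrowley.Event
AFTER UPDATE
AS
UPDATE e
SET LastChangedOn = getdate(),
    EnteredBy = SYSTEM_USER + ' (' + HOST_NAME() + ', spid '
                + CAST(@@SPID AS varchar(10)) + ')'
FROM jrowley.Event e
JOIN deleted d ON e.EventID = d.EventID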
Ok maybe someone smarter than me (not difficult) can help me out :)
Two queries:
#1:
select a.load_id, b.attribute_name, a.attribute_loc, b.attribute_loc
from PCI_Template_NR_Map b
left outer join PCI_Master a
    on a.attribute_name = b.attribute_name
    and a.load_id in (select distinct top 53 load_id from PCI_Load)
#2:
select a.load_id, b.attribute_name, a.attribute_loc, b.attribute_loc
from PCI_Template_NR_Map b
left outer join PCI_Master a
    on a.attribute_name = b.attribute_name
    and a.load_id in (select distinct top 54 load_id from PCI_Load)
#1 produces a correct left outer join: any values in PCI_Template_NR_Map that are not in PCI_Master show NULL. This works for any number of load_id values in the subselect up to 53.
#2 is the exact same query, except I am no longer limiting it to 53; when I get to 54 (or if I take away the TOP altogether) it returns rows as if it were a normal inner join instead of a left outer join (i.e. it only shows rows that match between PCI_Master and PCI_Template_NR_Map).
Can anyone explain to me what is happening here, and how to get around this issue? I need to be able to filter this for as many load_ids as I need (usually about 200). Thanks in advance.
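A hedged workaround sketch, assuming the intent is "all rows from PCI_Template_NR_Map, matched only against the filtered slice of PCI_Master": pushing the load_id filter into a derived table keeps it from interacting with the outer join, regardless of how many load_ids the subselect returns.

select a.load_id, b.attribute_name, a.attribute_loc, b.attribute_loc
from PCI_Template_NR_Map b
left outer join (
    select *
    from PCI_Master
    where load_id in (select distinct load_id from PCI_Load)
) a on a.attribute_name = b.attribute_name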
I have a fairly complex report: a couple of subreports on the left-hand side, a table in the middle, and a couple of rectangles on the right side of the screen. When I try to add another subreport, even though it is only about one inch square, it pushes the rectangles three or four inches further right when I display the report.
I have a data flow with two Lookup components (call them Lookup1 and Lookup2). They both query the same relational table, but with different values. Each has a single-row result set containing one column, and each of the two columns is mapped to a corresponding package-level variable. The original data flow sequence had Lookup1 executing after Lookup2. Each component redirects errors to a separate text file.
Lookup1 succeeds, but Lookup2 fails on every row, which populates its error text file; however, I can construct a SQL query from the Lookup2 values that returns the expected result.
If I reverse the sequence of components (Lookup2 followed by Lookup1), Lookup2 still fails on every row. Whenever both lookups are present in the data flow, Lookup2 fails for every row and all its rows are redirected to the error text file.
Now this is where it gets interesting. If I omit either Lookup1 or Lookup2 from the data flow, it works. If the data flow contains only Lookup1, the destination is populated. If the data flow contains only Lookup2, then no errors are written to the error text file, all lookups succeed, and the destination is populated.
I'm stumped. Is it possible that two lookups selecting from the same table could cause a problem? Each works independently, but when both are together in the data flow, Lookup2 fails for every row. I've been over the configuration and code a dozen times and am positive there are no errors; besides, Lookup2 runs fine if Lookup1 is excluded from the data flow.
In my package I have a source, a script component that makes some changes to the data, and a destination. To speed up the process, within one data flow I have created 6 copies of the above components and run them in parallel. Each source takes a different set of data; I have divided the data by record number such that each set reads 1 million records.
Now, my question is this: though each pipeline is supposed to process exactly 1 million records, they are not running at the same speed. For example, one pipeline completes processing all 1 million records while another has processed only 250,000 records in the same time. I don't see any reason why one should run slowly while another runs fast, considering that both are doing the same thing.
I have a stored proc with the following two lines of code:
Select @SumCredits = (Select Sum(CreditAmount) From AccountBalanceList WITH (NOLOCK) Where (AccountRowId = 2000) And (DocDate >= '1/1/1900' And DocDate <= '12/31/2099'))
Select @SumDebits = (Select Sum(DebitAmount) From AccountBalanceList WITH (NOLOCK) Where (AccountRowId = 2000) And (DocDate >= '1/1/1900' And DocDate <= '12/31/2099'))
If I execute this stored proc via Query Analyzer, it takes about 11 seconds. If I execute the above two SQL statements individually within Query Analyzer, each takes less than a second (the entire stored proc should take about a second). This hasn't always been happening. Just recently this behavior started occurring, after we imported a large amount of data into our database. However, I don't know if the two events are related.
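One hedged possibility, not a diagnosis: a cached plan compiled against statistics that predate the large import. A cheap first test is to refresh statistics and flush the proc's cached plan, then rerun it:

-- Sketch: refresh stats on the table and mark dependent procs for recompile
UPDATE STATISTICS AccountBalanceList
EXEC sp_recompile 'AccountBalanceList'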
Hello, I developed a Win32 .exe CGI that connects to a clustered SQL Server to report some data. The software is written with Borland C++ Builder. This is the OLE DB string:

Name=Provider=SQLOLEDB;Password=xxx;Persist Security Info=True;User ID=xxxx;Data Source=xxxxx;Initial Catalog=xxxxx;Network Library=dbmssocn

It suddenly stopped working on my customer's network, so I made some tests and verified that the problem is in the connection with SQL Server: my test program just opens a connection, closes it and exits, reporting in a log file whether the open succeeded or failed. If I run the program locally, just launching it, no problem, it works. I can run it multiple times continuously and it connects every time. If I run the program through my webserver, as a CGI, which is how it is supposed to work (http://localhost/scripts/connect.exe), it connects the first time, and then I have to wait 40 seconds to connect again successfully, or it fails. If I try against MY SQL Server on my laptop or on my network, no 40-second problem; but on my customer's network, with THEIR SQL Server, if I try to connect from their webserver, or from my laptop's webserver, I have this 40-second problem. I analyzed the network traffic, and I discovered that when I run my test program locally it originates only TCP/IP packets, and SQL Server answers only with TCP/IP. But when I use it from the webserver as a CGI, it originates a UDP packet, then SQL answers with another UDP packet, and then they communicate over TCP/IP. That is when it works: the second time, my program continues to send the UDP packet, but it receives no answer, and the communication fails. I can only say that:
- we haven't touched the program for months, and it really stopped working suddenly, so I suspect that something in my customer's network has changed
- I tried many different OLE DB strings, disabling connection pooling and all the services, calling the SQL Server by name or by IP...
- the problem can't be related to my program, because now it is really just an OLE DB connection test
Anyone have an idea? Thank you very much, Mattia
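The UDP exchange described sounds like the instance-resolution lookup on port 1434; a hedged workaround sketch is to name the TCP port directly in the Data Source so the provider can skip the UDP step entirely (1433 below is a placeholder for the instance's actual port):

Name=Provider=SQLOLEDB;Password=xxx;Persist Security Info=True;User ID=xxxx;Data Source=xxxxx,1433;Initial Catalog=xxxxx;Network Library=dbmssocn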
Has anyone experienced this? I'm upgrading my SSRS 2000 SP2 instance to 2005 SP2, and I have to "upgrade" the database twice (the first time I click "Upgrade" in the Configuration tool, it throws an error).
So, I get Report Manager to open. But when I go to run a report, I receive an rsInternalError exception. I look in the log and see that a few columns are marked NOT NULL and the system is trying to push null values into them. The columns are:
I also get serious blocking on the key of the ChunkData table in the tempDB as well. This was a serious issue that I had a ticket open with Microsoft about. As I am finding is typical of the helpdesk with SSRS, I had to fix it myself by adding locking hints to the CreateChunkAndGetPointer sp, changing the last line to:
SELECT @ChunkPointer = TEXTPTR(Content)
FROM [ReportServerTempDB].dbo.ChunkData AS CH with (nolock)
WHERE CH.ChunkID = @ChunkID
stopped the blocking issues that I was having.
Now, my question is this: is this normal, or do I have a hosed installation of SSRS 2000 that I should just take the time to scrap and rebuild? Should I have to do all this just to get Reporting Services working after an upgrade?
PS - I found a bug in the upgrade script that is generated for running later: if ReportServer is not the name of the database, the script will still use ReportServerTempDB in one of the stored procedures it creates.
Any input on this would be GREATLY appreciated. Any input from MS would be better!
I hope that you can help me, because I must build a release version of my SSIS package in 2 hours...
This is my big issue:
I have a SQL Server 2005 (SP1, 64-bit version) and a remote SQL Server 2000 (SP4). I've created a Data Flow task that contains an OLE DB source (Native Client on the local SQL 2005 server) and a Lookup transformation that has to connect to the remote SQL Server 2000 and return some columns. I can see everything, tables and columns, no problem at first sight. I choose some columns to use in the Lookup output and confirm. But when I re-open my Lookup task, the columns are no longer selected (checked), and obviously my OLE DB destination can't see those columns. I choose them, but SSIS doesn't store any columns in the output.
I've tried using the error output, the advanced editor, and creating another package with SQL 2005 tables only, but no change: the same silent error. I can't use the columns as output to add to my result sets.
Can you help me, please?? I really need it!!!
Hi, I came upon some strange behavior of the Format function in a report, which I'm unable to explain.
I have a double value which I want to format. E.g. the value is in the db field "Users" and is 303870.
1) If I set the Value property of the field to =Fields!Users.Value or =CDec(Fields!Users.Value) // the value is already double, so the CDec has no effect
and the Format property of the field to =Format(Me.Value,"#,#") or =Format(Fields!Users.Value,"#,#"), then the number visible in the report is 3303,873870 ???
2) If I put =Format(Fields!Users.Value,"#,#") in the Value property
and leave the Format property empty, then the output is OK (303,870); however, this is not desired, because the value is then handled as a string.
3) If I put =Fields!Users.Value in the Value property
and =Format(CStr(Me.Value),"#,#") into the Format property, then the output is OK again.
As I understand it, Format should behave the same as in VB (I am actually a C# programmer, not very familiar with VB), so I tried using Format on a double value in VB, and the output was as expected (e.g. 303,870), but when I applied it to a string I got only the style (e.g. #,#).
So I wonder: does the Reporting Services Format function work correctly only with string input? But why then does example 2) work? Or have I made a mistake somewhere?
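For comparison, a minimal sketch of the conventional setup, assuming the Format property is meant to hold a plain .NET format string rather than an expression that itself calls Format:

Value property: =Fields!Users.Value
Format property: #,#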
I have two SQL Server instances on two servers. One server is my webserver and database server, and the other one is just a database server. I have an application that calls a stored procedure located on the webserver/database server, which runs a query on the OTHER database server. I use linked servers in my first instance to make the call possible.
Everything was working just fine for months, until the database server was restarted and its IP address was changed. The name of the database server is the same, however, and my first SQL Server instance has no problems running queries against the other server's tables. But when I try to run the application, I get the following error:
Login failed for user 'sa'. Reason: Not associated with a trusted SQL Server connection
I have mixed mode authentication selected and my security uses the security context with username=sa and password=sa.
So here's the weird part.
The application will only run correctly after I manually run a SQL command from my webserver's Query Analyzer against the linked SQL Server; however, after a few minutes, the same error comes back!! So as a temporary fix, I scheduled a DTS job to run a simple query on the linked server every two minutes, so the application keeps working! It's almost as if the webserver's SQL Server forgets that the linked server is there, and by running a simple query in Query Analyzer, the connection gets refreshed and everything is normal again - for about 3 minutes!
I am completely stumped by what's happening and appreciate any help. Thank you.
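Not from the post, but one hedged thing to try: rebuild the linked-server definition and its login mapping, in case the stored definition still carries state from before the IP change (the server name below is a placeholder):

-- Sketch: drop and re-create the linked server plus its sa login mapping
EXEC sp_dropserver 'REMOTESRV', 'droplogins'
EXEC sp_addlinkedserver 'REMOTESRV'
EXEC sp_addlinkedsrvlogin 'REMOTESRV', 'false', NULL, 'sa', 'sa'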
I'm a wee bit of a newbie concerning DTS and have inherited a db with a DTS package containing a Copy SQL Server Objects task set to run nightly. Essentially, it does an informal backup of some core data.
Recently, I was notified that one of the tables it copies over is now empty on the Destination db. The DTS shows that it runs successfully with no errors logged, the table in question IS selected to be copied from the Source database, there IS data in the Source database table, and every other table in the Destination database is populated appropriately.
Any ideas on what would cause this one table to be empty without generating any errors?
I'm using Access as the front end to SQL 2000. I have a table of contacts; UniqueID is the PK. There's also a column named "CreatedBy" (int) and a column "Category" (smallint). They're all set to no nulls. If I create an index on Category, an SP that searches the CreatedBy column fails and I get an error message: 2757 "There was a problem accessing a property or method of the OLE object." If I delete that column from the index, then the opposite happens, with the same error. I have even deleted the columns, saved the table, and recreated the columns. This is an aggravating one!
We tried to create a new database on an application server (Win 2003 server/SQL Server 2003) and got the following error.
error 945....
.... device activation error. The physical filename 'g:\mssql\data\templog.ldf' may be incorrect.
The interesting thing is that tempdb is on f:\mssql, as shown in the database properties and with sp_helpdb.
Poking around in master, the real weirdness comes through: sysaltfiles has two entries each for the tempdb log and data files. One of them is on g: and one is on f:. The lower dbid is on f: and the higher one is on g: (actually the last two rows in the table).
Several months ago our software vendor moved tempdb from g: to f: to try to speed it up a bit. Apparently they messed it up, and now they have written us off till WE fix it.
The entries in sysaltfiles were the only references to g: that turned up (though we didn't look at every table and aren't even remotely sure where other references might be located).
Any pointers on getting this corrected would be greatly appreciated. We thought about trying a reconfigure and restarting, but I'm not real hopeful. We also thought about just updating the wrong entries to reflect the right locations, but that smacks of kludge.
Tangential weirdness: while trying to isolate the source of g:\mssql in the error, I found that in master..sysdevices the file location is e:\Program Files\Microsoft SQL Server\MSSQL\data\tempdb.mdf.
I believe this is from the initial install; then, while the server was being configured, it got moved to g:, then to f:.
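For what it's worth, a hedged sketch of the supported way to repoint tempdb files in SQL Server 2000; the logical names below (tempdev, templog) are the install defaults, so check the name column in sysaltfiles for the real ones, and the f: paths are taken from the post. The change takes effect at the next service restart:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'f:\mssql\data\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'f:\mssql\data\templog.ldf')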
I'm taking over the administration of a DB which has, in the system table sysaltfiles, some leftover files that are no longer used by tempdb.
How can I remove them? Every time I restart the SQL service, it tries to open those files listed in sysaltfiles.
I tried ALTER DATABASE tempdb REMOVE FILE XXXX, but it did not work...
I got this error:
ALTER DATABASE failed. Some disk names listed in the statement were not found. Check that the names exist and are spelled correctly before rerunning the statement.
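A hedged first step before changing anything: list exactly what sysaltfiles holds for tempdb, since REMOVE FILE matches on the logical name in the name column, not the physical path:

SELECT dbid, fileid, name, filename
FROM master..sysaltfiles
WHERE dbid = DB_ID('tempdb')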
I successfully did an MSSQL "file" restore of production to a different node yesterday, but failed to apply any transaction logs: it complained that one of the files had not been restored. On further investigation I found that one of the files is missing from sysfiles, but the file is in sysaltfiles.
These SQL statements do not return the same number of files:
SELECT * FROM <DB>..sysfiles
SELECT * FROM master..sysaltfiles WHERE dbid = DB_ID('<DB>')
sp_helpdb '<DB>' gives the same result as sysfiles.
Hi All, I have started SQL Server in single-user mode and changed the filename in sysaltfiles for tempdb to point to the new location. When starting SQL Server normally, it still points to the old path and doesn't get updated with the new path.
1) Is there any system table still to be altered???
2) I have even tried ALTER DATABASE; it doesn't work.
3) The master files have been taken from server1 (where tempdb points to d:\data) to server2 (where I need tempdb to point to e:\mssql\data). I can detach and attach msdb and model successfully, but in the case of tempdb I can't either alter sysaltfiles or detach and attach tempdb to the new path. I'm too tired from trying all the possibilities... Is there any possibility to update tempdb to point to the new path???
if someone can point me to documentation on this I would appreciate it.....
If there isn't any....
I am wondering about the behavior of SQL Server for table scans. In other databases, table scans are not really table scans; they are scans of the underlying tablespace for all the rows that are in the table, and if many tables are placed into the same tablespace then the obvious slowdown occurs as rows are scanned that are not in the table.
This used to be the case in SQL Server 7, but is it still the case in 2005 that if the explain says 'table scan' it will in fact scan the filegroup the table is in?
Some other databases also keep a map of the row numbers and the table each belongs to, and the optimiser decides whether to scan the data itself or to navigate through the map and fetch a row at a time, depending on the stats.
It seems that the graphical explain does not tell me more than 'table scan'. Is there any way to see down to the physical level of what the optimiser is going to do?
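A hedged sketch for digging below the graphical plan: the text showplan exposes the physical operator and the object (heap or index) it touches. The table name below is a placeholder:

SET SHOWPLAN_ALL ON
GO
SELECT * FROM dbo.SomeTable   -- hypothetical table
GO
SET SHOWPLAN_ALL OFF
GO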
I have a table. It shows "Rows: 2" when I open it in the table properties dialog. But there is no record in the table when I open it from Open Table -> Return All Rows.
I don't understand how this can happen. Any help is appreciated.
One of my tables, dbo.Customer, gives me a different value for "row count" every time I check the properties.
dbo.Customer has around 12,000 rows, but when I check the properties (or open the table) the value always differs; the row count can be anything from ~9,000 to 12,000.
It's as if the table is trying to load all the rows but can't.
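A hedged guess for the two posts above: the properties dialog reads the rows column of sysindexes, which is only an approximation and is allowed to drift. Comparing it with a real count, then refreshing it, would confirm (the database name below is a placeholder):

-- Sketch: compare the stored approximation with an actual count, then correct it
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('dbo.Customer') AND indid < 2
SELECT COUNT(*) FROM dbo.Customer
DBCC UPDATEUSAGE ('MyDatabase', 'Customer')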